Abstract
Ten English-speaking subjects listened to sentences that varied in sentential constraint (i.e., the degree to which the context of a sentence predicts the final word of that sentence), and event-related potentials (ERPs) were recorded during the presentation of the final word of each sentence. In the Control condition subjects merely listened to the sentences. In the Orthographic processing condition subjects decided, following each sentence, whether a given letter had been present in the final word of the preceding sentence. In the Phonological processing condition subjects judged whether a given speech sound was contained in the terminal word. In the Semantic processing condition subjects determined whether the final word was a member of a given semantic category. A previous finding in the visual modality, that the N400 component was larger in amplitude for low constraint sentence terminations than for high, was extended to the auditory modality. It was also found that the amplitude of an N200-like response was similarly responsive to contextual constraint. The hypothesis that N400 amplitude would vary significantly with the depth of processing of the terminal word was not supported by the data. The "N200" recorded in this language processing context showed the classic frontocentral distribution of the N200. The N400 to spoken sentences had a central/centroparietal distribution similar to the N400 in visual modality experiments. It is suggested that the N400 obtained in these sentence contexts reflects an automatic semantic processing of words that occurs even when semantic analysis is not required to complete a given task. The co-occurrence and topographical dissimilarity of the "N200" and N400 suggest that the N400 may not be a delayed or a generic N200.
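As a concrete illustration of the dependent measure in studies like this one, the sketch below scores mean ERP amplitude in a fixed post-stimulus window and contrasts the two constraint conditions. It is a minimal Python/NumPy sketch on synthetic data: the sampling rate, array shapes, the Pz channel index, and the 300-500 ms N400 window are all assumptions chosen for the example, not values taken from the paper.

import numpy as np

# Synthetic epoched EEG: (trials, channels, samples) at 200 Hz, -100..900 ms.
rng = np.random.default_rng(0)
fs = 200
times = np.arange(-0.1, 0.9, 1 / fs)
n_trials, n_channels = 40, 32
low = rng.normal(0, 5e-6, (n_trials, n_channels, times.size))   # low-constraint endings
high = rng.normal(0, 5e-6, (n_trials, n_channels, times.size))  # high-constraint endings

def mean_window_amplitude(epochs, times, tmin, tmax, channel):
    """Trial-average the epochs, then average the tmin-tmax window at one channel."""
    mask = (times >= tmin) & (times <= tmax)
    erp = epochs.mean(axis=0)            # (channels, samples) grand-average ERP
    return erp[channel, mask].mean()

pz = 12                                  # hypothetical index of electrode Pz
n400_low = mean_window_amplitude(low, times, 0.30, 0.50, pz)
n400_high = mean_window_amplitude(high, times, 0.30, 0.50, pz)
# A larger (more negative) N400 for low-constraint endings appears as a negative difference.
print(f"N400 effect (low - high) at Pz: {(n400_low - n400_high) * 1e6:+.3f} uV")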
... The PMN typically increases in amplitude in response to an unexpected phoneme, with listener expectations most often generated through task demands or context (e.g., cloze-probability sentences), and is purported to be distinct from surrounding language-related negativities including the MMN and N400 on the basis of its functional characteristics. The component is characterised as maximal at 300 ms post stimulus presentation, although research documents activity attributed to a PMN response starting as early as 150 ms post stimulus onset [9][10][11], or as late as 425 ms [12]. In addition to latency, the topography of the PMN effect is also notably varied, and despite most consistently being interpreted as a frontal/fronto-central component [10,[13][14][15], consideration of the available literature highlights an effect that most frequently spans frontal, central and parietal sites equally [16]. ...
... The component is characterised as maximal at 300 ms post stimulus presentation, although research documents activity attributed to a PMN response starting as early as 150 ms post stimulus onset [9][10][11], or as late as 425 ms [12]. In addition to latency, the topography of the PMN effect is also notably varied, and despite most consistently being interpreted as a frontal/fronto-central component [10,[13][14][15], consideration of the available literature highlights an effect that most frequently spans frontal, central and parietal sites equally [16]. Crucial to our understanding of the PMN, however, is its purported specific sensitivity to phonological processing, a key characteristic that the present paper serves to test. ...
... It is important to note that whilst we interpret our results here as evidence for the PMN's specific sensitivity to language, we refrain from making any claims as to the feature(s) of language the PMN may or may not be selectively sensitive to, as the effect's insensitivity to lexicality was not tested in this study and remains a distinction that we feel requires further investigation. Such a finding corroborates original interpretations of the response as a language-related negativity [10,11,13,15,30]. This said, although not supported by any statistical tests or analyses of the present study, visual observation of the topographical plots for the phoneme and note mismatch effects does suggest a certain degree of similarity between the responses. ...
The Phonological Mismatch Negativity (PMN) is an ERP component said to index the processing of phonological information, and is known to increase in amplitude when phonological expectations are violated. For example, in a context that generates expectation of a certain phoneme, the PMN will become relatively more negative if the phoneme is switched for an alternative. The response is comparable to other temporally proximate components, insofar as it indicates a neurological response to unexpected auditory input, but is still considered distinct by the field on the basis of its proposed specific sensitivity to phonology. Despite this, reports of the PMN overlap notably, both in temporal and topographic distribution, with the Mismatch Negativity (MMN) and the N400, and limited research to date has been conducted to establish whether these extant distinctions withstand testing. In the present study, we investigate the PMN’s sensitivity to non-linguistic mismatches so as to test the response’s specific language sensitivity. Participants heard primes—three-syllable words—played simultaneously with three-note tunes, with the instructions to attend exclusively to either the linguistic or musical content. They were then tasked with removing the first syllable (phoneme manipulation) or note (music manipulation) to form the target. Targets either matched or mismatched primes, thus achieving physically identical note or phoneme mismatches. Results show that a PMN was not elicited during the musical mismatch condition, a finding which supports suggestions that the PMN may be a language-specific response. However, our results also indicate that further research is necessary to determine the relationship between the PMN and N400. Though our paper probes a previously unstudied dimension of the PMN, questions still remain surrounding whether the PMN, although seemingly language-specific, is truly a phonology-specific component.
... However, depending on the modality, the distribution can differ. Auditory word presentation, for example, can lead to a more central or even global scalp distribution (Connolly et al., 1990, 1992; Connolly and Phillips, 1994; Bentin et al., 1993; Ackerman et al., 1994; McCallum et al., 1984; Holcomb and Anderson, 1993), and N400s on pictures can have a more frontal distribution (Holcomb and McPherson, 1994; Ganis et al., 1996). It was argued that component overlap with preceding components might influence the observed shifts in distribution. ...
... The N200 component as a phonological mismatch negativity (PMN), usually peaking between 250 and 300 ms, was first observed in works by Connolly et al. (1990, 1992). In their experiments on contextually constrained spoken sentences, this component preceded the N400 on sentence-terminal words. ...
... The time window of the PMN is slightly different depending on the source. The reported time windows usually are 250-300 ms (Connolly et al., 1990, 1992), 150-250 ms (Van Den Brink et al., 2001), and, overarching both of them, 150-300 ms (Praamstra and Stegeman, 1993). Although not always identified as a PMN, all reported findings link the N200/N250 to similar processes that are related to the phonological structure of stimuli. ...
... Recalling the aforementioned breakfast scenario, and based on previous work in the field, we outline how the three stages of language processing identified above would plausibly be affected when hearing 'plate' following a gaze toward the mug. Firstly, if gaze leads to a prediction that 'mug' will be heard, then a word which fails to confirm this expectation may be expected to result in an increase in N200 amplitude (Connolly, Stewart, & Phillips, 1990; Hagoort & Brown, 2000). Similarly, expectation of 'mug' might result in more difficult retrieval of 'plate' from semantic memory, as expressed by the N400 component (Kutas & Federmeier, 2011; Van Berkum, Koornneef, Otten, & Nieuwland, 2007). ...
... The N200 component as a Phonological Mapping Negativity (PMN) (Spivey, Joanisse, & McRae, 2012) has been previously observed when there is a mismatch between the expected word form given the context and the actual word candidates that are consistent with the speech signal listeners perceive (Connolly et al., 1990; Hagoort & Brown, 2000). In the breakfast scenario, such a mismatch would arise on the first phoneme of the uttered word 'plate', where the onset of 'mug' would be expected based on the preceding gaze cue. ...
... In summary, we hypothesized that gaze modulates listeners' expectations for a referent to be mentioned, possibly even anticipating a specific word, predicting the modulation of three established ERP components. Previous research has shown that, if a specific word form is predicted, an attenuated N200 is observed when this prediction is confirmed (Connolly et al., 1990; Hagoort & Brown, 2000). Based on the interpretation of the N200 as a Phonological Mapping Negativity (Spivey et al., 2012), we hypothesized that the N200 effect is driven by the amount of information conveyed by the incoming phoneme. ...
Behavioral studies have shown that speaker gaze to objects in a co-present scene can influence listeners’ sentence comprehension. To gain deeper insight into the mechanisms involved in gaze processing and integration, we conducted two ERP experiments (N = 30, Age: [18, 32] and [19, 33] respectively). Participants watched a centrally positioned face performing gaze actions aligned to utterances comparing two out of three displayed objects. They were asked to judge whether the sentence was true given the provided scene. We manipulated the second gaze cue to be either Congruent (baseline), Incongruent or Averted (Exp1)/Mutual (Exp2). When speaker gaze is used to form lexical expectations about upcoming referents, we found an attenuated N200 when phonological information confirms these expectations (Congruent). Similarly, we observed attenuated N400 amplitudes when gaze-cued expectations (Congruent) facilitate lexical retrieval. Crucially, only a violation of gaze-cued lexical expectations (Incongruent) leads to a P600 effect, suggesting the necessity to revise the mental representation of the situation. Our results support the hypothesis that gaze is utilized above and beyond simply enhancing a cued object’s prominence. Rather, gaze to objects leads to their integration into the mental representation of the situation before they are mentioned.
... The time windows that entered statistical analyses were determined according to the literature; however, upon visual observation of the grand average waveforms, we decided to use an additional time window, i.e., 0-200 ms post recognition point, which demonstrated a prominent difference between the congruent and incongruent CT conditions. The auditory N400 has been reported to have a latency of 200-400 ms post recognition point [13,28,[70][71][72][73][74], whereas the literature on the auditory LAN component has reported no homogeneous modality-specific latency shifts [75][76][77][78][79]. Therefore, two adjacent time windows were used to capture the N400 and LAN effects: 200-350 and 350-500 ms. The last time window (600-700 ms) was chosen based on the P600 literature [19,77]. ...
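As a sketch of how such window-based scoring works in practice, the snippet below takes the analysis windows named in this excerpt (relative to the recognition point) and computes condition differences of mean amplitude per window. The windows are the ones the excerpt names; the sampling rate, channel count, and placeholder ERP arrays are invented for illustration.

import numpy as np

# Analysis windows named in the excerpt, in seconds relative to the recognition point.
WINDOWS = {"early": (0.0, 0.20), "N400/LAN (1)": (0.20, 0.35),
           "N400/LAN (2)": (0.35, 0.50), "P600": (0.60, 0.70)}

def window_means(erp, times, windows):
    """Mean amplitude of a (channels, samples) ERP within each analysis window."""
    return {name: erp[:, (times >= lo) & (times < hi)].mean()
            for name, (lo, hi) in windows.items()}

rng = np.random.default_rng(1)
fs = 250
times = np.arange(-0.2, 0.8, 1 / fs)
congruent = rng.normal(0, 4e-6, (64, times.size))    # placeholder grand-average ERPs
incongruent = rng.normal(0, 4e-6, (64, times.size))

cong = window_means(congruent, times, WINDOWS)
incong = window_means(incongruent, times, WINDOWS)
for name in WINDOWS:
    print(f"{name:12s} incongruent - congruent: {(incong[name] - cong[name]) * 1e6:+.3f} uV")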
... Relative to the Correct condition, semantic incongruence between an adjective and a noun triggered a central negativity that started as early as the 0-200 ms window and became most prominent between 200 and 350 ms post recognition of the noun. The difference between the Correct and the Semantic violation conditions was compatible with the latency and the topographic distribution of the classic N400 effect [13,14,16,17,24,70,71,80]. Gender agreement violation between an adjective and a noun in the AN conditions, relative to the Correct condition, elicited a left-lateralized negativity that started at around 200 ms, lasted for about 400 ms, and was followed by a central-parietal positivity at around 600-700 ms post recognition point of the noun. ...
... However, their topographic distributions and/or polarity differed from those of the classic effects. Sortal nouns preceded by incongruent compared to congruent determination elicited an ERP response with an N400-like topographic distribution [13,14,16,17,24,70,71,80], but as a positive rather than a negative deflection. Individual nouns preceded by incongruent compared to congruent determination elicited a left temporal negativity and right posterior positivity in the 0-200 ms time window and a bilateral posterior positivity in the 200-350 ms time window. ...
A recent semantic theory of nominal concepts by Löbner [1] posits that, due to their inherent uniqueness and relationality properties, noun concepts can be classified into four concept types (CTs): sortal, individual, relational, functional. For sortal nouns the default determination is indefinite (a stone), for individual nouns it is definite (the sun), for relational and functional nouns it is possessive (his ear, his father). Incongruent determination leads to a concept type shift: his father (functional concept: unique, relational) becomes a father (sortal concept: non-unique, non-relational). Behavioral studies on CT shifts have demonstrated a CT congruence effect, with congruent determiners triggering faster lexical decision times on the subsequent noun than incongruent ones [2, 3]. The present ERP study investigated electrophysiological correlates of congruent and incongruent determination in German noun phrases, and specifically, whether the CT congruence effect could be indexed by such classic ERP components as the N400, LAN or P600. If incongruent determination affects the lexical retrieval or semantic integration of the noun, it should be reflected in the amplitude of the N400 component. If, however, CT congruence is processed by the same neuronal mechanisms that underlie morphosyntactic processing, incongruent determination should trigger LAN and/or P600. These predictions were tested in two ERP studies. In Experiment 1, participants just listened to noun phrases. In Experiment 2, they performed a well-formedness judgment task. The processing of (in)congruent CTs (his sun vs. the sun) was compared to the processing of morphosyntactic and semantic violations in control conditions. Whereas the control conditions elicited classic electrophysiological violation responses (N400, LAN, & P600), CT incongruences did not. Instead, they showed novel concept-type-specific response patterns. The absence of the classic ERP components suggests that CT-incongruent determination is not perceived as a violation of the semantic or morphosyntactic structure of the noun phrase.
... The subsequent two and a half decades of research have demonstrated that there is a greater variety of language-related stimulus characteristics that modulate the N400 than initially imagined. Thus, within sentence contexts the N400 occurs to or is modulated by, for example, congruous words terminating low contextually constrained sentences (Connolly et al., 1990, 1992). It also occurs to open-class words in a developing sentence context and becomes progressively smaller to such words as the sentence context develops and provides contextual constraints (Van Petten, 1993). ...
... Traditional paradigms that use spoken sentence-ending words that do not integrate easily into the preceding sentence context by virtue of semantic incongruity or low cloze probability often elicit a fronto-central or equally distributed N400 and either a slight left-hemispheric asymmetry or no asymmetry (e.g., Connolly et al., 1990, 1992; Connolly & Phillips, 1994; Bentin et al., 1993; Ackerman et al., 1994). However, other studies fail to see such a scalp distribution to speech stimuli (Holcomb & Neville, 1990; van den Brink et al., 2001; D'Arcy et al., 2004). ...
... Early work on contextually constrained sentence effects on ERPs to spoken sentences observed an N400 to semantically congruous words that terminated sentences of low contextual constraint. However, also observed was a negative-going response that occurred between 250 and 300 ms to all terminal words but was significantly larger to those ending the low contextually constrained sentences that produced N400 responses (Connolly et al., 1990, 1992). Speech comprehension is the classic example of a process that is dependent on a stimulus that unfolds over time. ...
... One question that arose in the earlier experiments (Connolly et al., 1990, 1992) was why the timing of the N400 in the auditory modality was virtually identical to that in the visual modality (e.g., Kutas & Van Petten, 1988), despite the fact that the presentation time of a word in the auditory modality took much longer than in the visual modality. In other words, an apparently semantic response (the N400) occurred with the same timing whether the eliciting word took 300 milliseconds to present (as in the auditory modality) or was presented completely in a matter of several milliseconds (as in the visual modality). ...
... While the masking technique differentially affected the PMN and the N400, in neither of the Connolly et al. (1990, 1992) studies was there a direct manipulation of the element believed to carry the critical information necessary to trigger the PMN. We report here an experiment designed to affect the PMN directly by manipulating the initial phoneme of sentence-terminal words and thereby test our hypothesis that the PMN reflects a phonological processing function that is sensitive to expectancies developed by the contextual constraint of the sentence. ...
... According to prior ERP literature, in spoken-language comprehension (van den Brink, Brown, & Hagoort, 2001; van den Brink & Hagoort, 2004), the N100 component mostly reflects bottom-up processing of the incoming signal, based on the acoustic properties of word onset. According to this view, the top-down processes driven by sentential context only begin to influence word recognition as early as 200 ms from the incoming word, as revealed by differential ERP responses in amplitude between strongly and weakly constraining contexts (Connolly, Phillips, Stewart, & Brake, 1992; Connolly, Stewart, & Phillips, 1990). Some authors (Connolly & Phillips, 1994; Connolly et al., 1990, 1992; Hagoort & Brown, 2000) proposed that the N200, occurring between 150 and 300 ms, is triggered by the mismatches at a phonological level with the lexical expectancies from the sentence context, whereas the N400, peaking around 400 ms after stimulus onset with a more posterior distribution across the scalp, reflects the consequences of contextually based expectations regarding upcoming words at a lexicosemantic level. ...
... According to this view, the top-down processes driven by sentential context only begin to influence word recognition as early as 200 ms from the incoming word, as revealed by differential ERP responses in amplitude between strongly and weakly constraining contexts (Connolly, Phillips, Stewart, & Brake, 1992; Connolly, Stewart, & Phillips, 1990). Some authors (Connolly & Phillips, 1994; Connolly et al., 1990, 1992; Hagoort & Brown, 2000) proposed that the N200, occurring between 150 and 300 ms, is triggered by the mismatches at a phonological level with the lexical expectancies from the sentence context, whereas the N400, peaking around 400 ms after stimulus onset with a more posterior distribution across the scalp, reflects the consequences of contextually based expectations regarding upcoming words at a lexicosemantic level. Additionally, ERP studies in written language have shown that the degree of semantic constraint does not modulate the N400 amplitude during the processing of semantically plausible words when they are not the most expected word for the sentence (Kutas & Hillyard, 1984; Thornhill & Van Petten, 2012; Van Petten & Luka, 2012). ...
... This conclusion is in accordance with the study of Van Petten, Coulson, Rubin, Plante, & Parks (1999) showing one unique N400 wave, triggered by the top-down predictions due to the sentence context. Regarding the effects of semantic constraint, and based on Connolly and colleagues' results (Connolly et al., 1990, 1992), target words embedded in low semantically constraining sentence frames should evoke a response with greater negative amplitude in comparison to the same targets when embedded in high constraining sentence frames (hence, more predictable). This shift may begin as early as 250 ms (putatively, the N200 or early N400 effect) and persist over later processing stages around the N400 window. ...
This study addresses how top-down predictions driven by phonological and semantic information interact on spoken-word comprehension. To do so, we measured event-related potentials to words embedded in sentences that varied in the degree of semantic constraint (high or low) and in regional accent (congruent or incongruent) with respect to the target word pronunciation. The data showed a negative amplitude shift following phonological mismatch (target pronunciation incongruent with respect to sentence regional accent). Here, we show that this shift is modulated by sentence-level semantic constraints over latencies encompassing auditory (N100) and lexical (N400) components. These findings suggest a fast influence of top-down predictions and the interplay with bottom-up processes at sublexical and lexical levels of analysis.
... The authors interpreted the reduced N400 amplitude after high cloze probability words as reflecting the meaning-building effect of the prior context, which makes the processing of later words that fit the context easier. In spoken sentence comprehension, cloze probability has often been reported to affect the N400, but also an earlier electrophysiological component, the N200 [15, 19]. Connolly and colleagues [15, 19] observed that the amplitude of N200 was higher for words with low cloze probability than for words with high cloze probability. ...
... In spoken sentence comprehension, cloze probability has often been reported to affect the N400, but also an earlier electrophysiological component, the N200 [15, 19]. Connolly and colleagues [15, 19] observed that the amplitude of N200 was higher for words with low cloze probability than for words with high cloze probability. This negative shift was interpreted as reflecting the ease with which a word is decoded at the phonological level. ...
... In contrast to the group with only perceptual exposure to an unfamiliar accent, in the group who imitated the accent no difference in the N200 component between high- and low-cloze words was observed. As the N200 is generally interpreted as reflecting acoustic/phonological processes [15, 19], the absence of a cloze probability effect in the imitator group seems to indicate that imitation facilitates the processing of the acoustic/phonological properties of words spoken in an unfamiliar accent. In line with our predictions, the cloze probability × group interaction also showed that the benefit of an imitative behavior was stronger for low-cloze words, that is, for words that are more difficult to predict from the preceding context. ...
This study investigated the effect of imitating an unfamiliar accent on the processing of spoken words embedded in sentential contexts produced in that accent. The cloze probability effect was tested in two groups of southern French speakers after they had to either listen to or imitate sentences spoken by a Belgian French speaker. Speakers who did not imitate the unfamiliar accent showed a cloze probability effect on the phonological N200 wave, while those who did imitate the accent showed no effect on this component. Over a later time window, both groups showed a cloze probability effect on the N400, which is associated with lexical and semantic processing. Taken together, these results give clear evidence for processing benefits from the imitation of speech patterns, particularly at an acoustic/phonological level of processing.
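Cloze probability, the variable manipulated in this study (and defined in the original abstract above as the degree to which context predicts the final word), is operationalized as the proportion of norming respondents who complete a sentence frame with a given word. A minimal sketch, with a made-up sentence frame and an illustrative 0.8 cutoff for "high cloze":

from collections import Counter

def cloze_probability(completions, word):
    """Proportion of norming respondents who completed the frame with `word`."""
    counts = Counter(w.strip().lower() for w in completions)
    return counts[word.lower()] / len(completions)

# Hypothetical norming responses for "He takes his coffee with milk and ___".
responses = ["sugar"] * 18 + ["cream", "honey"]
p = cloze_probability(responses, "sugar")
label = "high-cloze" if p >= 0.8 else "low-cloze"   # illustrative cutoff
print(f"cloze = {p:.2f} -> {label}")                # cloze = 0.90 -> high-cloze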
... Another goal of the Kutas & Hillyard (1983) study ... (Kutas, Van Petten, & Besson, 1988; Neville, Kutas, Chesney, & Schmidt, 1986), word repetition (Besson, Kutas, & Van Petten, 1992; Rugg, 1990), content words as compared to the function words, the degree of fit of the content words into the preceding context (Besson, Faita, Czternasty, & Kutas, 1997; Brown, van Berkum, & Hagoort, 2000; Connolly et al., 1992; Connolly, Stewart, & Phillips, 1990), etc. ...
... These studies showed that the amount of deviation from the expected context played an important role at the early processing stages (Connolly et al., 1992; Connolly et al., 1990; D'Arcy et al., 2004; Steinhauer & Connolly, 2008). However, we observed the PMN exclusively in the memory task experiment. ...
... In the auditory domain, the N400 is larger (more negative) when words are preceded by unrelated words than when words are preceded by related words (i.e., a semantic priming effect, see Holcomb & Neville, 1990). The N400 has been argued to reflect cognitive/linguistic processing in response to auditory input (Connolly et al., 1992), and is proposed to be functionally distinct from the earlier N200 (Connolly, Stewart, & Phillips, 1990; Connolly et al., 1992; van den Brink & Hagoort, 2004; van den Brink et al., 2001). Looking at the literature on lexical context effects, and the literature on the effect of lip-read context, we see multiple clear parallels: In both cases, the context effect strongly influences how listeners interpret ambiguous or unclear speech. ...
... Given that the lexicality effect is thus similar to the effect observed with naturally timed stimuli, it appears that the N200 effect can survive the ~800 ms delay, producing an N200 effect superimposed on the obligatory P2. Despite the clear central topography (as often found for the P2, see e.g.,), effects of lexicality were statistically alike across the scalp, providing additional support for the hypothesis that the effect is an N200 effect with a central distribution (Connolly et al., 1990) that is nonetheless statistically alike across the scalp (van den Brink et al., 2001). Although effects of lip-read context were largest in the 300–350 ms epoch, the same pattern of AV integration (i.e., more negative ERP amplitudes for AV–V than for A) started to appear at the P2 peak for a number of mid-central electrodes (e.g., C3, Cz, C4). ...
... This effect has been studied at the behavioral level (reduction of reaction time, RT) and at the electrophysiological level: the amplitude of the N400 is inversely proportional to the degree of semantic relatedness between prime and target [55]. When the semantic context consists of complete sentences, the spatial distribution, time course, and variations of the N400 effect are close to those observed for word pairs [28,47,85,156]. ...
... Experimental conditions also modulate the N400 component. Thus, the type of cognitive task and the degree of participant involvement (attention, motivation, motor response) influence the amplitude of the N400, which is typically larger for the most active tasks (pronunciation, lexical decision, categorization, plausibility judgment) compared with more passive tasks (distractor task, passive listening, silent reading) [12,28,140]. Note also that with word-pair presentations, the delay between prime and target (SOA, for stimulus onset asynchrony) favors the involvement of automatic, largely unconscious cognitive mechanisms (short SOA) or more controlled, intentional ones (long SOA). ...
One current goal in research on the initial stages of schizophrenia is to propose early, stable, and objective vulnerability markers. In this review, we first briefly describe this notion of early markers, or endophenotypes, particularly with respect to stability, specificity, and heritability. Among other research domains, the status of cognitive event-related potentials (ERPs) as possible endophenotypes has been suggested. The N400 component is a potential classically involved in the semantic processing of linguistic stimuli, as an ever-growing literature demonstrates. Here we review its most typical descriptions, its modulations, and the related theoretical models. We then summarize findings on alterations of the N400 component in schizophrenia, at the level of automatic spreading activation and at the more controlled level of taking semantic context into account. However, the underlying mechanisms remain poorly understood or controversial, as do the questions of the links, in patients with schizophrenia, between the N400 and symptomatology or clinical course. Finally, the notions of heritability (addressed only in terms of schizotypy) and schizophrenia specificity remain insufficiently explored. Although studies of the N400 component improve our understanding of the linguistic disturbances encountered by patients suffering from schizophrenia, it is difficult to grant the N400 the status of a robust schizophrenia endophenotype, at least in the current state of our knowledge.
... On the other hand, as the number of plausible sentential completions determines how efficiently incoming words are integrated, compatible words presented to the right hemisphere will show greater facilitation in strongly constraining contexts than in weakly constraining contexts. Sentence-level semantic bias also influences the N400 response to spoken words, with expected words eliciting smaller N400 amplitudes than unexpected words (Connolly et al., 1990; Herning et al., 1987). This auditory N400 effect is sensitive to the perceptual demands of the listening environment. ...
... To investigate potential hemispheric asymmetries in sentence-level semantic processing in the auditory modality, ear of presentation of the sentence contexts (right versus left) was counterbalanced across participants. Based on previous findings, it was expected that target congruency would modulate N400 amplitudes, with larger N400 responses anticipated for incongruent targets than congruent targets (Kutas and Hillyard, 1980, 1984; Connolly et al., 1990). The N400 component was also expected to be influenced by the strength of the contextual bias, as determined by the cloze probability of congruent targets (Kutas and Hillyard, 1984). ...
... We thus claim that the EN should not be interpreted as the early onset of an N400 as suggested by Holcomb and Neville (1990, 1991) and Diaz and Swaab (in press). Yet, the latency of the EN component is also substantially shorter than previously reported N200/PMN deflections for deviating segmental phonological input (Connolly, Stewart, & Phillips, 1990; Connolly, Phillips, Stewart, & Brake, 1992; Connolly & Phillips, 1994; Hagoort & Brown, 2000a; Van den Brink, Brown, & Hagoort, 2001; D'Arcy, Connolly, Service, Hawco, & Houlihan, 2004). In brief, these studies show that words which either match or do not match a sentential context start to be differentiated before their semantic content has been fully encountered. ...
... In brief, these studies show that words which either match or do not match a sentential context start to be differentiated before their semantic content has been fully encountered. Connolly et al. (1990) were the first to report this ERP response, which peaks at 250 ms and displays a fronto-central scalp distribution. In particular, the ERP was observed for semantic mismatches which were at the same time phonological word-onset errors. ...
The current study on German investigates Event-Related brain Potentials (ERPs) for the perception of sentences with intonations which are infrequent (i.e. vocatives) or inadequate in daily conversation. These ERPs are compared to the processing correlates for sentences in which the syntax-to-prosody relations are congruent and used frequently during communication. Results show that perceiving an adequate but infrequent prosodic structure does not result in the same brain responses as encountering an inadequate prosodic pattern. While an early negative-going ERP followed by an N400 was observed for both the infrequent and the inadequate syntax-to-prosody association, only the inadequate intonation also elicits a P600.
... The semantic context can classically consist of either pairs of words (semantic priming) or complete sentences. While most N400 studies have been conducted in the visual modality, a few experiments have used connected speech (i.e., complete sentences auditorily presented) (Connolly et al., 1990; Connolly and Phillips, 1994). Interestingly, the N400 effect (i.e., the difference wave between the N400s obtained from semantically incongruous and congruous words) typically develops earlier in natural speech (Holcomb and Neville, 1990, 1991; Besson et al., 1997; Hagoort and Brown, 2000). ...
... The ERPs obtained from correct responses were analyzed using the Brain Analyzer software (v6.1) by measuring the mean amplitude and its latency in two selected time windows: the N400 interval at 130-450 ms and the LPC interval at 450-800 ms. These two selected temporal windows were based upon visual inspection of the difference waveforms and data from the literature (Connolly et al., 1990; Holcomb and Neville, 1990, 1991; Besson et al., 1997; Hagoort and Brown, 2000; Kutas and Federmeier, 2011). As recently measured in N400 studies among psychiatric patients (Kiang et al., 2007, 2010; Iakimova et al., 2009; Ryu et al., 2012), mean amplitude refers here to the mean peak voltage observed in the selected temporal windows. ...
... However, the meaning or the semantic feature is vital to understanding competing speech streams. Recent studies have also revealed that the processing of semantic or linguistic features can be modulated by attention, since features in the attended stream are better understood than those in the unattended stream (Connolly et al. 1990; Heil et al. 2004; Broderick et al. 2018; Har-shai Yahav and Zion Golumbic 2021; Dai et al. 2022). It is still unclear whether the speaker-listener coupling was triggered by the same meaning in the speech or caused by extra-linguistic attention modulation (Hasson et al. 2012; Schoot et al. 2016; Stolk et al. 2016; Hartley and Poeppel 2020; Pérez and Davis 2023). ...
When we pay attention to someone, do we focus only on the sounds they make and the words they use, or do we form a mental space shared with the speaker we want to pay attention to? Some would argue that human language is nothing more than a simple signal, but others claim that human beings understand each other because they form a shared mental ground between the speaker and the listener. Our study aimed to explore the neural mechanisms of speech-selective attention by investigating the electroencephalogram-based neural coupling between the speaker and the listener in a cocktail party paradigm. The temporal response function method was employed to reveal how the listener was coupled to the speaker at the neural level. The results showed that the neural coupling between the listener and the attended speaker peaked 5 s before speech onset in the delta band over the left frontal region, and was correlated with speech comprehension performance. In contrast, the attentional processing of speech acoustics and semantics occurred primarily at a later stage after speech onset and was not significantly correlated with comprehension performance. These findings suggest a predictive mechanism to achieve speaker-listener neural coupling for successful speech comprehension.
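The temporal response function (TRF) method named in this abstract maps a stimulus or speaker signal onto the listener's EEG across a range of time lags, commonly via regularized regression. The sketch below estimates a TRF by ridge regression on synthetic data; the sampling rate, lag range, regularization value, and the synthetic response kernel are illustrative assumptions, not the authors' pipeline.

import numpy as np

def lagged_design(x, lags):
    """Stack time-shifted copies of stimulus x (n_samples,) into an (n_samples, n_lags) matrix."""
    X = np.zeros((x.size, lags.size))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = x[:x.size - lag]
        else:
            X[:lag, j] = x[-lag:]
    return X

def fit_trf(x, y, lags, lam=1.0):
    """Ridge-regression temporal response function mapping x to y over the given lags."""
    X = lagged_design(x, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(lags.size), X.T @ y)

rng = np.random.default_rng(2)
fs = 64
x = rng.normal(size=fs * 60)                    # e.g., a 60 s speech envelope at 64 Hz
true_kernel = np.exp(-np.arange(16) / 4.0)      # synthetic "neural response" kernel
y = np.convolve(x, true_kernel)[:x.size] + rng.normal(scale=0.5, size=x.size)
lags = np.arange(0, 32)                         # 0-500 ms of positive lags at 64 Hz
w = fit_trf(x, y, lags)
print("recovered kernel peaks at lag", int(lags[np.argmax(w)]), "samples")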
... Speech stimuli are more often used in this situation due to concern about the extent of visual processing in DOC patients, even when they are tested with their eyes open; there is also concern that they may choose to close their eyes at times that interfere with meaningful data collection efforts. Recording the N400 to speech stimuli was accomplished relatively soon after the initial discovery of the N400 (Kutas and Hillyard, 1980), using sentence stimuli (Connolly et al., 1990) and word-word priming designs (Holcomb and Neville, 1990), and the auditory N400 has proven to be as reliably recorded as its visual counterpart. However, the stronger semantic context effects found with sentences were notable both in terms of the amplitude of the N400 for grouped data and the identification of the response at the individual participant level; an effect replicated over the intervening years, with the only exception being the strong context produced by picture-word paradigms, where the N400 to words unrelated to the preceding picture prime was seen at group and individual levels (e.g., Connolly et al., 1995). ...
The paramount importance of research design and research methodologies within the shared space of neurology, clinical neurophysiology, and cognitive neuroscience serves as the theme around which a range of topics is presented. After a tour of historical figures of human electrophysiology and electroencephalography (EEG), the discussion turns to event-related potentials (ERPs). Emphasizing the lengthy history of these manifestations of cognition, the chapter outlines the extensive research literature that has demonstrated the sensitivity of ERPs to a range of cognitive functions, including attention, language processing, and memory. There follows a series of examples of ERP applications in the clinical domain, including disorders of consciousness, stroke, autism, coma, and concussion. These examples not merely demonstrate the general utility of these electrophysiological responses but stress that their independence from behavioral responses provides a much-needed clinical method to assess individuals who are literally or virtually impossible to assess using traditional behaviorally based clinical tools. The chapter concludes with the suggestion that it is time that the incontrovertible utility of ERPs be employed more fully within clinical contexts to assist the clinical community in providing objective assessments of a range of neurologic conditions.
... It appears that even though literal constituent meanings typically do not contribute to the understanding of the idiomatic meaning, their processing is still automatically carried out. This conclusion is comparable to the notion that semantic processing cannot be switched off, as for example Connolly, Stewart, and Phillips (1990) showed for spoken language processing. We speculate that this is similar to a Stroop-like effect (Stroop, 1935) where the literal meaning of the word is not informative, but is nevertheless activated (cf. ...
How the language processing system handles formulaic language such as idioms is a matter of debate. We investigated the activation of constituent meanings by means of predictive processing in an eye-tracking experiment and in two ERP experiments (auditory and visual). In the eye-tracking experiment, German-speaking participants listened to idioms in which the final word was excised (Hannes let the cat out of the ...). Well before the offset of these idiom fragments, participants fixated on the correct idiom completion (bag) more often than on unrelated distractors (stomach). Moreover, there was an early fixation bias towards semantic associates (basket) of the correct completion, which ended shortly after the offset of the fragment. In the ERP experiments, sentences (spoken or written) either contained complete idioms, or the final word of the idiom was replaced with a semantic associate or with an unrelated word. Across both modalities, ERPs reflected facilitated processing of correct completions across several regions of interest (ROIs) and time windows. Facilitation of semantic associates was only reliably evident in early components for auditory idiom processing. The ERP findings for spoken idioms complement the eye-tracking data by pointing to early decompositional processing of idioms. It seems that in spoken idiom processing, holistic representations do not solely determine lexical processing.
... The PMN (also referred to as N200, N250 or phonological mapping negativity) was first reported by [1,7,8]. They presented spoken sentences to participants while recording the electroencephalography (EEG) signal. ...
... For instance, Connolly and Phillips [34] report an early negativity (between 150 and 300 ms) in response to sentence-final words phonologically inconsistent with the semantic context thus far. This component typically occurs with a fronto-central distribution [37,38] (but cf. [34]). ...
Sign languages use the horizontal plane to refer to discourse referents introduced at referential locations. However, the question remains whether the assignment of discourse referents follows a particular default pattern, as recently proposed, such that two new discourse referents are respectively assigned to the right (ipsilateral) and left (contralateral) side of (right-handed) signers. The present event-related potential study on German Sign Language investigates the hypothesis that signers assign distinct and contrastive referential locations to discourse referents even in the absence of overt localization. Using a semantic mismatch design, we constructed sentence sets where the second sentence was either consistent or inconsistent with the used pronoun. Semantic mismatch conditions evoked an N400, whereas a contralateral index sign engendered a Phonological Mismatch Negativity. The current study provides supporting evidence that signers are sensitive to the mismatch and make use of a default pattern to assign distinct and contrastive referential locations to discourse referents.
... Malins and colleagues examined a negative deflection believed to be related to pre-lexical processing. This component has been shown to be modulated by word-initial phoneme mismatches between expected and observed words (e.g., Connolly & Phillips, 1994; Connolly, Stewart, & Phillips, 1990). The results showed that the two conditions of interest (tonal mismatch, e.g., hua1 'flower' - hua4 'painting'; and rime mismatch, e.g., hua1 'flower' - hui1 'gray') showed similar timings of PMN responses. ...
... The N200 effect is thought to functionally reflect the perceptual comparison of a target word with an expected word form (van den Brink, Brown, & Hagoort, 2002). This is based on findings showing that participants demonstrate larger N200 responses when they hear a target word that has an initial sound that mismatches the initial sound of the expected word (e.g., participants hear target word 'queen' when the expected word is 'eyes') given the sentence context (e.g., 'Phil put some drops in his ___'), compared to when they hear a target word (e.g., 'icicles') that has an initial sound that matches the initial sound of the expected word (Connolly, Stewart, & Phillips, 1990; Connolly, Phillips, Stewart, & Brake, 1992; Connolly & Phillips, 1994; Hagoort & Brown, 2000; van den Brink et al., 2002). ...
Much of the current literature on selective social learning focuses on the external factors that trigger children’s selectivity. In this chapter, we review behavioral, eye-tracking, and electrophysiological evidence for how children selectively learn words—what the internal processes are that enable them to block learning when they doubt the epistemic quality of the source. We propose that young children engage a semantic-blocking mechanism that allows for the initial encoding of words but disrupts the creation of lexico-semantic representations. We offer a framework that can be extended to other selective word learning contexts to investigate whether a similar semantic-gating mechanism is engaged in different contexts. Lastly, we propose several implications of the evidence we review for the standard model of word learning.
... ERP components appear sensitive to differing aspects of spoken word recognition: the N400, a component traditionally associated with semantic processing of spoken or written words (Bentin et al., 1993; Kutas and Hillyard, 1980), and an earlier occurring negativity, the Phonological Mapping Negativity (PMN), which has been previously linked to phonological processes (Connolly et al., 1990, 1992). Though the N400 is typically associated with semantic analysis, there is a sizable literature showing that the N400 is also modulated by phonological factors (Dumay et al., 2001; Praamstra and Stegeman, 1993; Praamstra et al., 1994; Radeau et al., 1998; Rugg, 1984a, b). ...
... On a passive reading task groups may be reading the sentences at different depths. Connolly, Stewart, and Phillips (1990) and Bentin, Kutas, and Hillyard (1993) have suggested that N400 on a passive reading or phonological task is analogous to the N400 elicited on tasks requiring semantic processing, suggesting that N400 is an obligatory response when attention is directed to any degree to language stimuli. Although N400 has been evoked in passive reading tasks in normal subjects, it may be increasingly elicited during active reading. ...
Thought disorder in schizophrenia may involve abnormal semantic activation or faulty working memory maintenance. Event-related potentials (ERPs) were recorded while sentences reading "THE NOUN WAS ADJECTIVE/VERB" were presented to 34 schizophrenic and 34 control subjects. Some nouns were homographs with dominant and subordinate meanings. Their sentence ending presented information crucial for interpretation (e.g., The bank was [closed/steep]). Greatest N400 activity to subordinate homograph-meaning sentence endings in schizophrenia would reflect a semantic bias to strong associates. N400 to all endings would reflect faulty verbal working memory maintenance. Schizophrenic subjects showed N400 activity to all endings, suggesting problems in contextual maintenance independent of content, but slightly greater N400 activity to subordinate endings that correlated with the severity of psychosis. Future research should help determine whether a semantic activation bias in schizophrenia toward strong associates is reflected in ERP activity or whether this effect is overshadowed by faulty verbal working memory maintenance of context.
... Each sentence evoked a large onset response and then a series of smaller deflections. Other studies of evoked potentials to sentences have not concerned themselves with these smaller deflections, usually averaging together responses to different sentences or looking at the responses to specific events within a sentence (Connolly et al., 1990, 1992; D'Arcy et al., 2004). ...
... These findings support the N400 as a potential to detect higher cognitive functions that require directed attention and confirm previous studies showing strong attenuation or extinction of the N400 effect when attention is not directed toward the stimuli (McCarthy and Nobre, 1993; Chwilla et al., 1995). However, other studies did not find that the N400 is modulated by the depth of processing level (Connolly et al., 1990; Relander et al., 2009). In their review, Deacon and Shelley-Tremblay (2000) concluded that the N400 does not necessarily require attention but only occurs if the processing of the stimuli is not actively inhibited. ...
Event-related potentials (ERPs) have been proven to be a useful tool to complement clinical assessment and to detect residual cognitive functions in patients with disorders of consciousness. These ERPs are often recorded using passive or unspecific instructions. Patient data obtained this way are then compared to data from healthy participants, which are usually recorded using active instructions. The present study investigates the effect of attentive modulations and particularly the effect of active vs. passive instruction on the ERP components mismatch negativity (MMN) and N400. A sample of 18 healthy participants listened to three auditory paradigms: an oddball, a word priming, and a sentence paradigm. Each paradigm was presented three times with different instructions: ignoring auditory stimuli, passive listening, and focused attention on the auditory stimuli. After each task, the participants indicated their subjective effort. The N400 decreased from the focused task to the passive task, and was extinct in the ignore task. The MMN exhibited higher amplitudes in the focused and passive task compared to the ignore task. The data indicate an effect of attention on the supratemporal component of the MMN. Subjective effort was equally high in the passive and focused tasks but reduced in the ignore task. We conclude that passive listening during EEG recording is stressful and attenuates ERPs, which renders the interpretation of the results obtained in such conditions difficult.
... While most studies of context effects on word expectancy have been conducted in the visual modality, some experiments have used connected speech (e.g., [59]). Typically, and as noted above, results showed an early onset of the N400 effect [12,14] that could develop before the word's recognition point (before words can be recognized as unique items; e.g., [89,92,96,101]). The functional significance of this early effect, and whether it reflects an early onset of the N400 effect [96] or some pre-N400 effects [31] (possibly linked with the occurrence of N200 components [91] or of a Phonological Mismatch Negativity (PMN, [14])), still remains an open question. ...
Context:
Using natural connected speech, the aim of the present study was to examine the semantic congruity effect (i.e. the difference between semantically incongruous and congruous words) in sentence contexts that generate high or moderate final word expectancies.
Methods:
We used sentences with two levels of word expectancy in the auditory modality: familiar proverbs (that generate high final word expectancy), and unfamiliar sentences (that generate only moderate final word expectancy).
Results:
Results revealed an early congruity effect (0-200 ms) that developed across all scalp sites for familiar proverbs but not for unfamiliar sentences. By contrast, typical centro-parietal N400 and Late Positivity Component congruity effects developed later (200-500 ms and 600-900 ms ranges) for both familiar proverbs and unfamiliar sentences.
Discussion:
We argue that the early congruity effect for proverbs comprises both a Phonological Mismatch Negativity, reflecting the processing of the acoustic/phonological mismatch between the expected (congruous) and unexpected (incongruous) sentence completions, and a typical N400 semantic congruity effect with an unusually short latency, because final words can be predicted from the unusually high contextual constraints of familiar proverbs. These results are considered in the light of current views of anticipation and prediction processes in sentence contexts.
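Where such windowed congruity effects are reported, the underlying computation is typically a condition difference wave averaged within a priori latency windows. Below is a minimal, hypothetical Python sketch of that computation; the array shapes, sampling rate, and window boundaries are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch: congruity effect as a difference wave, averaged within
# a priori latency windows. All data here are simulated placeholders.
import numpy as np

fs = 500  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
# Placeholder epochs: (trials, channels, samples), epoch onset at t = 0
epochs_incongruous = rng.standard_normal((40, 32, int(0.9 * fs)))
epochs_congruous = rng.standard_normal((40, 32, int(0.9 * fs)))

# Difference wave: incongruous minus congruous condition averages
diff = epochs_incongruous.mean(axis=0) - epochs_congruous.mean(axis=0)

# Mean amplitude per channel in each latency window (in seconds),
# mirroring the windows reported above
windows = {"early": (0.0, 0.2), "N400": (0.2, 0.5), "LPC": (0.6, 0.9)}
for name, (t0, t1) in windows.items():
    effect = diff[:, int(t0 * fs):int(t1 * fs)].mean(axis=1)
    print(name, effect.round(3))
```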
... Another possibility is that when rating the rhythmic variability of speech samples, participants attend to the semantic message in their native language and therefore devote less attention to the rhythmic variability. Attention to semantic information occurs automatically even when it is not required (Connolly, Stewart, & Phillips, 1990). The inability to attend to the message of an unfamiliar language may allow listeners to devote more of their attention to the rhythmic characteristics of the speech signal. ...
The music of expert musicians reflects the speech rhythm of their native language. Here, we examine this effect in amateur and novice musicians. English- and French-speaking participants were both instructed to produce simple “English” and “French” tunes using only two keys on a keyboard. All participants later rated the rhythmic variability of English and French speech samples. The rhythmic variability of the “English” and “French” tunes that were produced reflected the perceived rhythmic variability in English and French speech samples. Yet, the pattern was different for English and French participants and did not correspond to the actual measured speech rhythm variability of the speech samples. Surprise recognition tests two weeks later confirmed that the music–speech relationship remained over time. The results show that the relationship between music and speech rhythm is more widespread than previously thought and that musical rhythm production by amateurs and novices is concordant with their rhythmic expectations in the perception of speech.
... Attention to lexical/semantic speech information is automatic and occurs even when it is not required by the task at hand (Connolly, Stewart, & Phillips, 1990; Parmentier, 2008). However, lexical/semantic processing also competes for attentional resources that would otherwise be available for indexical change detection (Vitevitch, 2003). ...
We first replicated the language-familiarity effect for voice discrimination and found better voice discrimination in familiar languages. However, when listeners were not cued to listen for changes, both English and Spanish speakers exhibited greater change deafness in their familiar language. Results suggest that lexical/semantic attention in a familiar language and increased indexical processing in an unfamiliar language can produce greater change deafness in familiar languages.
... Based on these studies, it is difficult to differentiate phonological from lexical processes since ERP differences between words and pseudo-words are observed only in late latencies. However, a negativity arising earlier than N400 (around 300 ms) has been described as being specifically linked to phonological processes and called the phonological mapping negativity (PMN) (Connolly and Phillips, 1994; Connolly, Stewart, & Phillips, 1990; Thierry, Doyon, & Demonet, 1998). This component recorded in response to words as well as pseudo-words was not modulated by the lexical factor, contrary to the N400, but appeared to be sensitive to phonological expectation rather than lexical expectation. ...
Recent theories of the physiology of language suggest a dual-stream dorsal/ventral organization of speech perception. Using intracerebral event-related potentials (ERPs) recorded during the pre-surgical assessment of twelve drug-resistant epileptic patients, we aimed to single out electrophysiological patterns during lexical-semantic and phonological monitoring tasks involving ventral and dorsal regions, respectively. Phonological information processing predominantly occurred in the left supramarginal gyrus (dorsal stream), and lexico-semantic processing occurred in the anterior/middle temporal and fusiform gyri (ventral stream). Similar latencies were identified in response to phonological and lexico-semantic tasks, suggesting parallel processing. Typical ERP components were strongly left-lateralized, since no evoked responses were recorded in homologous right structures. Finally, the ERP patterns suggested the inferior frontal gyrus as the likely final common pathway of both dorsal and ventral streams. These results provide detailed evidence of the spatiotemporal information processing in the dual pathways involved in speech perception.
... Kutas and Hillyard (1984) noticed that the more a word was unexpected, the more the N400 amplitude increased, suggesting that N400 amplitude is inversely related to the subject's semantic expectancy. It is possible to observe the N400 even when subjects have to direct their attention away from the semantic aspect of the stimulus (Connolly, Stewart, & Phillips, 1990; Kutas & Hillyard, 1989; Perrin & Garcia-Larrea, 2003), suggesting that the N400 reflects automatic processes. The medial and lateral temporal cortex, as well as the left frontal and parietal cortex, seem to participate in its generation (Hagoort, Brown, & Swaab, 1996; McCarthy, Nobre, Bentin, & Spencer, 1995; Smith, Stapleton, & Halgren, 1986). ...
The event-related potential (ERP) method allows us to explore the extent to which the human brain can process auditory information from the external world during sleep. ERP studies demonstrate that information processing is still efficient during sleep, and sometimes in a manner very similar to that observed during wakefulness. They show that the sleeping subject may detect stimulus deviance, as well as the presence of her/his own first name in sequences of equiprobable first names. The hypothesis that some semantic analysis of auditory stimuli remains possible during sleep is confirmed by the persistence of differential ERPs to related and unrelated words during sleep. Furthermore, it seems that linguistic absurdity is treated differently during paradoxical sleep, since pseudowords (which have no meaning) yield a response similar to that of related words, whereas they elicit a response more similar to that of unrelated words during waking and sleep stage 2.
When we pay attention to someone, do we focus only on the sounds they make and the words they use, or do we form a mental space shared with the speaker we want to attend to? Some would argue that human language is nothing more than a simple signal, while others claim that human beings understand each other not only by relying on the words that have been said but also by forming a shared ground in the specific conversation. This debate arose long ago, but the conclusion remains vague. Our study aimed to investigate how attention modulates the neural coupling between the speaker and the listener in a cocktail party paradigm. The temporal response function (TRF) method was employed to reveal how the listener was coupled to the speaker at the neural level. The results showed that the neural coupling between the listener and the attended speaker peaked 5 seconds before speech onset in the delta band over the left frontal region, and was correlated with speech comprehension performance. In contrast, the attentional processing of speech acoustics and semantics occurred primarily at a later stage after speech onset and was not significantly correlated with comprehension performance. These findings suggest that the human brain might adopt a predictive mechanism to achieve speaker-listener neural coupling for successful speech comprehension.
Three key points
The listener's EEG signals coupled to the speaker's 5 s before speech onset, revealing a "beyond the stimulus" attentional modulation.
Speaker-listener attentional coupling correlated with the listener's comprehension performance, whereas speech-listener coupling did not.
The combination of temporal response function methods and neural language methods offers a novel perspective on the analysis of inter-brain studies (see the sketch below).
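For readers unfamiliar with the TRF approach mentioned above, the following is a minimal sketch of one common variant: ridge regression of a time-lagged stimulus feature onto a single EEG channel. The variable names, sampling rate, regularization strength, and simulated data are all illustrative assumptions; this is not the authors' pipeline, which operated on inter-brain (speaker-listener) signals.

```python
# Minimal TRF sketch: ridge regression of a time-lagged feature onto one
# EEG channel. `stimulus` could be a speech envelope or, by analogy with
# inter-brain analyses, a feature of the speaker's signal (all assumed).
import numpy as np

def lagged_design(stimulus, n_lags):
    """Build a (n_samples, n_lags) matrix of past stimulus values."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    return X

def fit_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Ridge solution w = (X'X + aI)^-1 X'y; w is the TRF over lags."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

rng = np.random.default_rng(1)
fs = 100                                          # assumed sampling rate (Hz)
stimulus = rng.standard_normal(fs * 60)           # 60 s of a stimulus feature
eeg = np.convolve(stimulus, [0, 0.5, 1.0, 0.5], mode="full")[: len(stimulus)]
eeg += 0.1 * rng.standard_normal(len(eeg))        # simulated EEG channel
trf = fit_trf(stimulus, eeg, n_lags=int(0.5 * fs))  # lags spanning 0-500 ms
```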
How quickly do children and adults interpret scalar lexical items in speech processing? The current study examined the interpretation of the scalar terms some vs. all in contexts where either the stronger (some = not all) or the weaker (some allows all) interpretation was permissible. Children and adults showed increased negative deflections in brain activity following the word some in some-infelicitous versus some-felicitous contexts. This effect was found as early as 100 ms across central electrode sites (in children), and at 300–500 ms across left frontal, fronto-central, and centro-parietal electrode sites (in children and adults). These results strongly suggest that young children (aged between 3 and 4 years), as well as adults, quickly access the contextually appropriate interpretation of scalar terms.
This PhD thesis examines neural mechanisms of linguistic mismatch in adults and children based on dialect familiarity and the impact of speaking Swiss German (CHG) dialect on early reading and spelling acquisition in Standard German (StG).
Study 1 investigated familiarity effects for dialect-based phonological processing in adults and employed an EEG-based MMN paradigm with pseudowords. MMN ERP measures revealed that a higher degree of familiarity with dialect-specific allophonic variants impacted neural processing efficiency, to the extent that less familiar variants demanded more widespread activation processes.
Study 2 investigated how familiarity with dialect-specific pronunciation and lexicality of spoken words impacted phonological and semantic processing at the neural level in CHG and StG native children, shortly before literacy acquisition in school. Results revealed a semantic mismatch (N400-LPC) effect for neural processing of unfamiliar words, but not for pronunciation variants (only LPC).
Study 3 investigated how speaking the CHG dialect (together with other variables) impacted reading and spelling acquisition after one year of formal instruction in school. Although no differences in Grade 1 reading and spelling were found between groups of children with different CHG exposure, structural equation modeling (SEM) revealed that high CHG exposure was negatively associated with Grade 1 spelling and reading when statistically controlling for early literacy-related skills.
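As a rough illustration of what "negatively associated when statistically controlling for early literacy-related skills" means computationally, the following sketch fits an ordinary regression with a covariate on simulated data. This is a simplified analogue only; the thesis used full SEM, and every variable name and coefficient here is hypothetical.

```python
# Simplified analogue of the reported SEM result: regress a Grade 1
# literacy outcome on dialect exposure while controlling for early
# literacy-related skills. Column names and effects are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
df = pd.DataFrame({
    "early_literacy": rng.standard_normal(n),   # early literacy-related skills
    "chg_exposure": rng.uniform(0, 1, n),       # degree of CHG dialect exposure
})
# Simulated outcome with an assumed negative exposure effect
df["spelling_g1"] = (0.6 * df["early_literacy"]
                     - 0.3 * df["chg_exposure"]
                     + rng.standard_normal(n))

model = smf.ols("spelling_g1 ~ chg_exposure + early_literacy", data=df).fit()
print(model.params)  # chg_exposure coefficient: exposure effect, adjusted
```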
Arithmetic problems share many surface‐level features with typical sentences. They assert information about the world, and readers can evaluate this information for sensibility by consulting their memories as the statement unfolds. When people encounter the solution to the problem 3 × 4, the brain elicits a robust ERP effect as a function of answer expectancy (12 being the expected completion; 15 being unexpected). Initially, this was labeled an N400 effect, implying that semantic memory had been accessed. Subsequent work suggested instead that the effect was driven by a target P300 to the correct solutions. The current study manipulates operand format to differentially promote access to language‐based semantic representations of arithmetic. Operands were presented either as spoken number words or as sequential Arabic numerals. The critical solution was always an Arabic numeral. In Experiment 1, the correctness of solutions preceded by spoken operands modulated N400 amplitude, whereas solutions preceded by Arabic numerals elicited a P300 for correct problems. In Experiment 2, using only spoken operands, the delay between the second operand and the Arabic numeral solution was manipulated to determine if additional processing time would result in a P300. With a longer delay, an earlier N400 and no distinct P300 were observed. In brief, highly familiar digit operands promoted target detection, whereas spoken numbers promoted semantic level processing—even when solution format itself was held constant. This provides evidence that the brain can process arithmetic fact information at different levels of representational meaningfulness as a function of symbolic format.
Objective: Event-related brain potentials (ERPs) were used to assess language function after stroke and demonstrate that it is possible to adapt neuropsychological tests to evaluate neurocognitive function using ERPs. Prior ERP assessment work has focused on language in both healthy individuals and case studies of aphasic neurotrauma patients. The objective of the current study was to evaluate left-hemisphere stroke patients who had varying degrees of receptive language impairment. It was hypothesized that ERPs would assess receptive language function accurately and correlate highly with the neuropsychological data. Methods: Data were collected from 10 left-hemisphere stroke patients; all were undergoing rehabilitation at the time of testing. Each patient received a battery of neuropsychological tests including the Peabody Picture Vocabulary Test-Revised (PPVT-R; Minnesota: American Guidance Service, 1981). ERPs were recorded during a computerized PPVT-R, in which pictures are presented followed by digitized spoken words that are either congruent or incongruent with the pictures. Results and conclusion: Incongruent spoken words within an individual's vocabulary level elicited well-known ERP components. One of the components (the N400) could be utilized as a marker of intact semantic processing. The ERP results were subsequently quantified and N400 derivative scores correlated highly with the neuropsychological findings. The results provided a clear demonstration of the efficacy of ERP-based assessment in a neurological patient group. Significance: Language function in stroke patients can be evaluated, independent of behavior, using electrophysiological measures that correlate highly with traditional neuropsychological test scores.
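A minimal sketch of how an "N400 derivative score" of this kind might be computed and related to behavioral test scores is shown below. The window, channel, arrays, and simulated PPVT-R values are assumptions for illustration, not the authors' scoring procedure.

```python
# Minimal sketch: an N400 difference score (incongruent minus congruent
# mean amplitude in an assumed 300-500 ms window) correlated with a
# behavioral measure (placeholder PPVT-R scores). All data simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
fs = 250                                          # assumed sampling rate (Hz)
n_patients = 10
# Per-patient average waveforms at one centro-parietal channel, 1 s epochs
incongruent = rng.standard_normal((n_patients, fs))
congruent = rng.standard_normal((n_patients, fs))
ppvt_scores = rng.uniform(60, 120, n_patients)    # placeholder PPVT-R scores

w0, w1 = int(0.3 * fs), int(0.5 * fs)             # 300-500 ms window
n400_score = (incongruent - congruent)[:, w0:w1].mean(axis=1)
r, p = pearsonr(n400_score, ppvt_scores)
print(f"r = {r:.2f}, p = {p:.3f}")
```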
In the present study, we explored the influence of emotional words on the semantic integration of their following neutral nouns during sentence comprehension. We manipulated the emotionality of verbs and the semantic congruity of their following (neutral) object nouns in sentences. Event-related potentials were recorded to the verbs, which were either negative or neutral, and to the object nouns, which were either semantically congruent or incongruent relative to the preceding contexts. We found an N400 and a P600 effect in response to the semantic congruity of the nouns when they followed the neutral verbs. However, the P600 (but not the N400) semantic congruity effect may have been attenuated when the nouns followed the negative verbs. Meanwhile, the negative verbs elicited a larger P2 and N400 than did the neutral verbs. The results indicate that the attention captured by emotional words impaired reanalysis of the following incongruent information, demonstrating a dynamic influence of emotional words on the semantic processing of following information during sentence comprehension.
This chapter describes how event-related potential (ERP) components have been used to answer questions about attentional processing. In particular, it discusses how attention modulates the flow of sensory processing in relatively simple tasks and how it operates at postperceptual levels in more complex dual-task paradigms. The chapter focuses primarily on the variety of attention called selective attention, the processes by which the brain selects some sources of inputs for enhanced processing. The first section describes how ERPs first became used in the study of attention, highlighting the unique ability of ERPs to answer questions that had puzzled attention researchers for decades. The second section describes major ERP attention studies in the auditory and visual modalities, respectively. The chapter concludes with a discussion of the operation of attention in postperceptual systems, such as working memory encoding and response selection.
Much research has analyzed the relationship between event-related potentials (ERPs) and language, but little of it has addressed Chinese. Here, participants were divided equally into two groups: group A was in a cold environment, and group B was in a fatigued condition. ERP signals were recorded while participants repeated Chinese sentences. We found that the amplitudes of the ERP components (N200, N400, P600) changed markedly when participants repeated Chinese sentences with different emotional content. A digital filter implemented in MATLAB was used to remove artifacts, and the cleaned EEG signals were then analyzed with EEGLAB.
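For illustration, here is a minimal Python analogue of the filtering step described above (the study itself used MATLAB and EEGLAB): a zero-phase Butterworth band-pass filter applied to a simulated EEG channel. The cutoff frequencies, filter order, and sampling rate are assumptions, not the study's reported settings.

```python
# Illustrative analogue of the artifact-filtering step: a zero-phase
# band-pass filter to suppress slow drift and high-frequency noise
# before ERP analysis. All parameters are assumed for the sketch.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500                                   # assumed sampling rate (Hz)
b, a = butter(4, [0.1, 30], btype="bandpass", fs=fs)

rng = np.random.default_rng(4)
eeg = rng.standard_normal(fs * 10)         # 10 s of simulated EEG
clean = filtfilt(b, a, eeg)                # forward-backward (zero-phase)
```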
In this study we investigated the availability of non-target language semantic features in bilingual speech processing. We recorded EEG from Dutch-English bilinguals who listened to spoken sentences in their L2 (English) or L1 (Dutch). In Experiments 1 and 3 the sentences contained an interlingual homophone. The sentence context was either biased towards the target language meaning of the homophone (target biased), the non-target language meaning (non-target biased), or neither meaning of the homophone (fully incongruent). These conditions were each compared to a semantically congruent control condition. In L2 sentences we observed an N400 in the non-target biased condition that had an earlier offset than the N400 to fully incongruent homophones. In the target biased condition, a negativity emerged that was later than the N400 to fully incongruent homophones. In L1 contexts, neither target biased nor non-target biased homophones yielded significant N400 effects (compared to the control condition). In Experiments 2 and 4 the sentences contained a language switch to a non-target language word that could be semantically congruent or incongruent. Semantically incongruent words (switched, and non-switched) elicited an N400 effect. The N400 to semantically congruent language-switched words had an earlier offset than the N400 to incongruent words. Both congruent and incongruent language switches elicited an LPC (Late Positive Component). These findings show that bilinguals activate both meanings of interlingual homophones irrespective of their contextual fit. In L2 contexts, the target-language meaning of the homophone has a head start over the non-target language meaning. The target-language head start is also evident for language switches from both L2-to-L1 and L1-to-L2.
Although nearly one hundred scientific papers purporting to deal with evoked potential correlates of language processes have been published over the past three decades, little specific information is currently known concerning the electrophysiological correlates of language. This is especially true in the areas of syntax, semantics, pragmatics, and sentence processing. While phonology has received more systematic attention than the other divisions of language, even there large tracts remain virtually untouched. Such poor progress in spite of the relatively large number of Event Related Potential (ERP) studies can be attributed to (1) the general lack of systematic research except in only a few studies, (2) a simplified view of language coupled with the absence of an overall linguistically oriented approach to guide stimulus selection and control as well as the experimental design, (3) the more prevalent interest in ERP demonstrations of laterality—a situation that may have distracted investigators from identifying specific ERP differences to language related stimuli, (4) the use of few electrode sites, with the majority of studies using two sites, (5) the use of inappropriate analysis procedures or the absence of objective statistical techniques, (6) conclusions that over-generalize from the absence of effects or which ignore effects not in keeping with the hypotheses being advanced.
In the present study, brain responses were recorded during the presentation of naturally spoken sentences. In two separate experiments, the same set of stimulus sentences was presented with subjects either asked to pay close attention in order to answer content questions following the run ('memory instruction' - MI) or to press one of two buttons to indicate a normal or unusual ending to a sentence ('response instruction' - RI). Brain event-related potentials were not averaged across the exact same acoustic information but across 49 different words spoken in natural, uninterrupted sentences. There was no attempt to standardize the acoustic features of stimulus words by electronic means. Rather than splicing stimulus words (and the trigger pulse needed for computer averaging) onto sentence stems, monosyllabic consonant-vowel-consonant (CVC) words were selected with voiceless stop consonants in the word-initial position. This not only avoids acoustic overlap with the preceding word of the sentence but also allows the point of stimulus word onset to be precisely located. In the MI group, brain responses to the semantically anomalous endings were distinguished by the presence of a late negative wave (N300) followed by a sustained positive wave (P650). Responses to anomaly in the RI group data were not consistently differentiated from normal in the 650-1000 ms range. Within conditions, the MI and RI waveforms were differentiated by the presence of an augmented positive-going slow wave in the RI condition, which may reflect an augmented CNV release. The feasibility of averaging brain electrical responses across non-isolated words which differed acoustically but were of similar phonemic structure was demonstrated. This paradigm provides a means of studying speech-activated neurolinguistic processes in the stream of speech and may make complex spoken language contexts available for event-related potential investigations of brain and language functions.
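The averaging logic described here (time-locking to precisely located word onsets and averaging across acoustically different words) can be sketched as follows. Array shapes, the sampling rate, baseline window, and onset times are all hypothetical placeholders, not the study's recording parameters.

```python
# Minimal sketch: extract epochs time-locked to word-onset triggers,
# baseline-correct on the pre-stimulus interval, and average within a
# condition, even though the words themselves differ acoustically.
import numpy as np

def average_erp(eeg, onsets, fs, tmin=-0.1, tmax=1.0):
    """Baseline-corrected average of EEG segments around each onset."""
    i0, i1 = int(tmin * fs), int(tmax * fs)
    epochs = np.stack([eeg[:, t + i0 : t + i1] for t in onsets])
    baseline = epochs[:, :, : -i0].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)

rng = np.random.default_rng(5)
fs = 250                                        # assumed sampling rate (Hz)
eeg = rng.standard_normal((8, fs * 120))        # 8 channels, 2 min of data
onsets = np.arange(2 * fs, 110 * fs, 3 * fs)    # simulated word onsets
erp = average_erp(eeg, onsets, fs)              # (channels, samples) average
```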
A photic probe paradigm was developed in order to assess the cerebral excitation patterns corresponding to acoustic and linguistic operations independently of specific signal parameters. Average Evoked Potentials (AEPs) to a photic stimulus were recorded at the anterior temporal and posterior regions of both hemispheres of subjects engaged in acoustic, phonetic, and semantic processing of verbal material. Patterns of attenuation and enhancement of the probe AEP amplitudes were observed over the left and right hemispheres, respectively, with the magnitude of change varying reliably as a function of processing task. The implications of these findings for models of cerebral specialization for language are discussed.