Article

Word-specific repetition effects revealed by MEG and the implications for lexical access

Authors:
Diogo Almeida, David Poeppel

Abstract

This magnetoencephalography (MEG) study investigated the early stages of lexical access in reading, with the goal of establishing when initial contact with lexical information takes place. We identified two candidate evoked responses that could reflect this processing stage: the occipitotemporal N170/M170 and the frontocentral P2. Using a repetition priming paradigm in which long and variable lags were used to reduce the predictability of each repetition, we found that (i) repetition of words, but not pseudowords, evoked a differential bilateral frontal response in the 150-250 ms window, and (ii) a differential repetition N400m effect was observed between words and pseudowords. We argue that this frontal response, an MEG correlate of the P2 identified in ERP studies, reflects early access to long-term memory representations, which we tentatively characterize as being modality-specific.

... For instance, a very recent study utilized the RS paradigm and observed increased N1 onsets but decreased N1 offsets for repeated words (Maurer et al., 2023). Previous studies also reported a P200 repetition effect (Almeida & Poeppel, 2013). For instance, Rugg, Doyle, and Wells (1995) revealed that the repetition of words elicited prominent early frontal P200 effects commencing around 240 msec, whereas the repetition of pseudowords failed to generate such effects. ...
... The extent to which the effects of repetition are influenced by the probability of repetition is still controversial, especially concerning its temporal aspects. Despite previous EEG studies investigating RS effects on visual words (Eisenhauer et al., 2019; Almeida & Poeppel, 2013; Fiebach, Gruber, & Supp, 2005; Deacon, Dynowska, Ritter, & Grose-Fifer, 2004), less is known regarding whether and when RS is influenced by expectation or P(rep) in visual word processing. To the best of our knowledge, to date, no EEG studies have been conducted to examine the P(rep) effects during visual word processing within the specific context of Chinese reading. ...
... The latency and distribution of the second RS effects correspond to a P2-like component with bilateral posterior positivity. The P2-like word RS effects, showing a frontocentral distribution, are consistent with previous studies that reported frontal P2 effects for repetitions of orthographic strings and words (Almeida & Poeppel, 2013; Rugg et al., 1995; Nagy & Rugg, 1989). Masked priming experiments reported P2 enhancements for immediate word repetitions (Woollams et al., 2008; Misra & Holcomb, 2003). ...
Article
Full-text available
Visual word recognition is commonly rapid and efficient, incorporating top–down predictive processing mechanisms. Neuroimaging studies with face stimuli suggest that repetition suppression (RS) reflects predictive processing at the neural level, as this effect is larger when repetitions are more frequent, that is, more expected. It remains unclear, however, at the temporal level whether and how RS and its modulation by expectation occur in visual word recognition. To address this gap, the present study aimed to investigate the presence and time course of these effects during visual word recognition using EEG. Thirty-six native Cantonese speakers were presented with pairs of Chinese written words and performed a nonlinguistic oddball task. The second word of a pair was either a repetition of the first or a different word (alternation). In repetition blocks, 75% of trials were repetitions and 25% were alternations, whereas the reverse was true in alternation blocks. Topographic analysis of variance of EEG at each time point showed robust RS effects in three time windows (141–227 msec, 242–445 msec, and 467–513 msec) reflecting facilitation of visual word recognition. Importantly, the modulation of RS by expectation was observed at the late rather than early intervals (334–387 msec, 465–550 msec, and 559–632 msec) and more than 100 msec after the first RS effects. In the predictive coding view of RS, only late repetition effects are modulated by expectation, whereas early RS effects may be mediated by lower-level predictions. Taken together, our findings provide the first EEG evidence revealing distinct temporal dynamics of RS effects and repetition probability on RS effects in visual processing of Chinese words.
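The topographic analysis of variance (TANOVA) mentioned in the abstract above compares the shape of the scalp field between two conditions at every time point, using a permutation test on the dissimilarity of amplitude-normalized topographies. The following is a minimal, illustrative Python sketch of that general idea, assuming hypothetical arrays `cond_a` and `cond_b` of average-referenced EEG with shape (subjects, channels, time points); it is not the authors' code.

```python
# Point-wise TANOVA sketch: permutation test on global map dissimilarity.
import numpy as np

def _diss(u, v):
    # Global dissimilarity between two GFP-normalized topographies.
    u = (u - u.mean()) / u.std()
    v = (v - v.mean()) / v.std()
    return np.sqrt(np.mean((u - v) ** 2))

def tanova(cond_a, cond_b, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    n_sub, _, n_times = cond_a.shape
    p_vals = np.ones(n_times)
    for t in range(n_times):
        observed = _diss(cond_a[:, :, t].mean(0), cond_b[:, :, t].mean(0))
        null = np.empty(n_perm)
        for i in range(n_perm):
            # Randomly swap condition labels within subjects and recompute the group-level dissimilarity.
            flip = rng.random(n_sub) < 0.5
            a = np.where(flip[:, None], cond_b[:, :, t], cond_a[:, :, t])
            b = np.where(flip[:, None], cond_a[:, :, t], cond_b[:, :, t])
            null[i] = _diss(a.mean(0), b.mean(0))
        p_vals[t] = (null >= observed).mean()
    return p_vals  # one permutation p-value per time point
```

Consecutive significant time points are then typically grouped into windows, which is one common way to arrive at effect intervals like those reported above.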
... Some studies have raised controversies regarding the repetition enhancement effect on P200. For example, in the study of Almeida and Poeppel (2013), participants performed lexical decisions on real words and pseudowords, and the pseudowords were all pronounceable and phonotactically legal strings. All of their stimuli were items with large orthographic neighborhoods, and the results demonstrated an interaction between lexicality (word vs. pseudoword) and stimulus repetition. ...
... However, some studies did not demonstrate the repetition effect on P200 in the reading of real words (Swick, 1998; Laszlo et al., 2012). The definition of lexicality in these studies was different from that in Almeida and Poeppel (2013). In Swick (1998), participants performed lexical decisions on real words and non-words created by rearranging the sequence of letters in a word. ...
... The present study revealed increases in P200 associated with repetition for real characters. This result is consistent with previous reports showing a repetition enhancement effect on P200 activity for real words (Van Petten et al., 1991; Curran and Dien, 2003; Misra and Holcomb, 2003; Almeida and Poeppel, 2013). Furthermore, these repetition enhancement effects were found in IMF4, which manifested itself as alpha-band activity. ...
Article
Full-text available
Most studies on word repetition have demonstrated that repeated stimuli yield reductions in brain activity. Despite the well-known repetition reduction effect, some literature reports repetition enhancements in electroencephalogram (EEG) activity. However, although studies of object and face recognition have consistently demonstrated both repetition reduction and enhancement effects, reports of repetition enhancement have been less consistent in studies of visual word recognition. Therefore, the present study aimed to further investigate the repetition effect on the P200, an early event-related potential (ERP) component that indexes the coactivation of lexical candidates during visual word recognition. To achieve a high signal-to-noise ratio, EEG signals were decomposed into various modes by using the Hilbert–Huang transform. Results demonstrated a repetition enhancement effect on P200 activity in alpha-band oscillations and showed that lexicality and orthographic neighborhood size influence the magnitude of the repetition enhancement effect on P200. These findings suggest that alpha activity during visual word recognition might reflect the coactivation of orthographically similar words in the early stages of lexical processing. Meanwhile, there were repetition reduction effects on ERP activity in theta–delta band oscillations, which might indicate that lateral inhibition between lexical candidates is omitted upon repetition.
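The Hilbert–Huang step described in this abstract decomposes the EEG into intrinsic mode functions (IMFs) and then examines the mode whose frequency content falls in the alpha band. Below is a hedged sketch of that kind of decomposition using the PyEMD and SciPy packages; the function, the alpha-band selection criterion, and the single-channel input are illustrative assumptions, not the authors' exact pipeline.

```python
# Empirical mode decomposition plus Hilbert instantaneous frequency, to pick an alpha-band IMF.
import numpy as np
from PyEMD import EMD                 # pip install EMD-signal
from scipy.signal import hilbert

def alpha_band_imf(signal, sfreq, band=(8.0, 13.0)):
    imfs = EMD()(signal)              # decompose the 1-D signal into intrinsic mode functions
    for imf in imfs:
        analytic = hilbert(imf)
        inst_phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(inst_phase) * sfreq / (2 * np.pi)   # instantaneous frequency in Hz
        if band[0] <= inst_freq.mean() <= band[1]:
            return imf                # first IMF whose mean frequency falls in the alpha band
    return None                       # no alpha-band mode found in this epoch
```

Component amplitudes (e.g., the P200) can then be measured on the selected IMF rather than on the broadband ERP, which is the sense in which an effect can be localized to a particular IMF.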
... A first indication of semantic-free context effects was found in priming investigations (which mimic context-based predictions; see DeLong et al., 2014), demonstrating reliable priming effects for non-words for which prelexical but no semantic information exists. Note, however, that in some studies priming effects are stronger when lexical/semantic information is present, i.e., for words (e.g., Almeida and Poeppel, 2013; Ferrand and Grainger, 1992; Fiebach et al., 2005), while others found no differences between word and non-word priming (Deacon et al., 2004; Laszlo and Federmeier, 2007; Laszlo et al., 2012). Initial evidence for knowledge effects without semantics comes from non-word familiarization tasks, e.g., for left posterior regions using fMRI (Glezer et al., 2015; Xue and Poldrack, 2007) and for late positivities measured with ...
... Kretzschmar et al., 2015; Penolazzi et al., 2007, respectively; knowledge was operationalized as word frequency in these studies). The same is true for word/pseudoword priming studies exploring context by knowledge interactions (e.g., Almeida and Poeppel, 2013 vs. Deacon et al., 2004; Laszlo and Federmeier, 2007; Laszlo et al., 2012, respectively). Here, ...
... Previous mixed results from sentence studies may be due to more ambiguities arising from the gradual increase of semantic context as a sentence unfolds. In addition, priming studies that did not find a context by knowledge interaction (Deacon et al., 2004; Laszlo and Federmeier, 2007; Laszlo et al., 2012), in contrast to the present and other studies finding this interaction (Almeida and Poeppel, 2013; match of orthographic similarity based on bigram frequency), did not explicitly control for the orthographic similarity of words and non-words. Therefore, we claim that priming paradigms like the one used here allow a more systematic way to investigate context by knowledge interactions, as the context manipulation can be stringently controlled and the equalized orthographic similarity controls for a critical confounding variable. ...
Preprint
Full-text available
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels, were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N=38 human participants) while Experiment 2 assessed behavioral facilitation effects (N=24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms, in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in pseudowords familiarized through training) did not enhance or reduce context effects significantly. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented using a predictive coding mechanism.
Significance Statement: The goal of reading is to derive meaning from script. This highly automatized process benefits from facilitation depending on word familiarity and text context. Facilitation might occur exclusively within each level of word processing (i.e., visual, pre-lexical, and/or lexical-semantic) but could alternatively also propagate in a top-down manner from higher to lower levels. To test the relevance of these two alternative accounts at each processing level, we combined a pseudoword learning approach controlling for letter string familiarity with repetition priming. We found enhanced context-based facilitation at the lexical-semantic but not pre-lexical processing stage, and no evidence of top-down facilitation from lexical-semantic to earlier word recognition processes. We also identified predictive coding as the most likely mechanism underlying within-level context-based facilitation.
... Fig. 1b shows, in detail, the expected electrophysiological and behavioral responses reflecting context-based facilitation at visual, pre-lexical, and lexical-semantic processing levels. For example, we expect an interaction of lexical-semantic familiarity (presence vs. absence of lexical-semantic information) and context (prime/without context vs. target/with context) reflected by a stronger activation decrease (repetition suppression) for words in contrast to meaningless pseudowords (as shown by, e.g., Almeida and Poeppel, 2013). If restricted to the N400 time window, this pattern would indicate that facilitation is implemented exclusively at the lexical-semantic level, whereas earlier effects would suggest top-down facilitation from lexical-semantic to earlier processing stages. ...
... Please note that in the following, for the sake of brevity, we will subsume the processing of words and pseudowords under the term visual word recognition, as we assume similar pre-lexical processing for words and novel pseudowords reflecting the orthographic familiarity (OLD20) match. The finding of stronger repetition suppression for words was consistent with previous studies (Almeida and Poeppel, 2013; Fiebach et al., 2005; but see Deacon et al., 2004; Laszlo and Federmeier, 2007; Laszlo et al., 2012) and identified sources of the effect were compatible with previous localizations of the N400 within the anterior temporal cortex (e.g., Lau et al., 2013a; Lau and Nguyen, 2015). In contrast, we could not identify a pre-lexical modulation (i.e., an increased reduction of activation for familiarized pseudowords) at any time window. ...
Article
Full-text available
Word familiarity and predictive context facilitate visual word processing, leading to faster recognition times and reduced neuronal responses. Previously, models with and without top-down connections, including lexical-semantic, pre-lexical (e.g., orthographic/phonological), and visual processing levels, were successful in accounting for these facilitation effects. Here we systematically assessed context-based facilitation with a repetition priming task and explicitly dissociated pre-lexical and lexical processing levels using a pseudoword (PW) familiarization procedure. Experiment 1 investigated the temporal dynamics of neuronal facilitation effects with magnetoencephalography (MEG; N = 38 human participants), while Experiment 2 assessed behavioral facilitation effects (N = 24 human participants). Across all stimulus conditions, MEG demonstrated context-based facilitation across multiple time windows starting at 100 ms, in occipital brain areas. This finding indicates context-based facilitation at an early visual processing level. In both experiments, we furthermore found an interaction of context and lexical familiarity, such that stimuli with associated meaning showed the strongest context-dependent facilitation in brain activation and behavior. Using MEG, this facilitation effect could be localized to the left anterior temporal lobe at around 400 ms, indicating within-level (i.e., exclusively lexical-semantic) facilitation but no top-down effects on earlier processing stages. Increased pre-lexical familiarity (in PWs familiarized through training) did not enhance or reduce context effects significantly. We conclude that context-based facilitation is achieved within visual and lexical processing levels. Finally, by testing alternative hypotheses derived from mechanistic accounts of repetition suppression, we suggest that the facilitatory context effects found here are implemented using a predictive coding mechanism.
... In the past, studies using MEG signals have shown that there are two major effects seen in the brain when the same words are presented repeatedly. In Repetitive Enhancement (RE), frontal regions of the brain are activated when the same word from an unknown language is presented to the subject multiple times [18], after which the activations drop, leading to Repetitive Suppression (RS). RS is also observed when a word familiar to the subject is presented. ...
... In order to further analyze the inter-trial distances, the trials are broken down into two phases as before: Phase-I (trials 1-10) and Phase-II (trials 11-20). The means of the inter-trial distances in Phase-I (denoted as d1) and Phase-II (denoted as d2) are calculated. ...
Article
Full-text available
This paper presents an experimental study to understand the key differences in the neural representations when the subject is presented with speech signals of a known and an unknown language, and to capture the evolution of neural responses in the brain for a language learning task. In this study, electroencephalography (EEG) signals were recorded while the human subjects listened to a given set of words from English (familiar language) and Japanese (unfamiliar language). The subjects also provided behavioural signals in the form of spoken audio for each input audio stimulus. In order to quantify the representation-level differences for the auditory stimuli of the two languages, we use a classification approach to discriminate the two languages from the EEG signals recorded during the listening phase by designing an offline classifier. These experiments reveal that the time-frequency features along with phase contain significant language-discriminative information. The language discrimination is further confirmed with a second subsequent experiment involving Hindi (the native language of the subjects) and Japanese (unknown language). A detailed analysis is performed on the recorded EEG signals and the audio signals to further understand the language learning processes. A pronunciation rating technique applied to the spoken audio data confirms the improvement of pronunciation over the course of trials for the Japanese language. Using single-trial analysis, we find that the EEG representations also attain a level of consistency indicating a pattern formation. The brain regions responsible for language discrimination and learning are identified based on EEG channel locations and are found to be predominantly in the frontal region.
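The phase-wise inter-trial distance summary quoted in the excerpt above (d1 for trials 1-10, d2 for trials 11-20) reduces to a few lines of array code. The sketch below is one plausible reading, assuming a (20, n_features) array of per-trial EEG feature vectors and the mean pairwise Euclidean distance within each phase; the actual distance metric used in the paper may differ.

```python
# Mean inter-trial distances within Phase-I and Phase-II of a 20-trial sequence.
import numpy as np
from scipy.spatial.distance import pdist

def phase_distances(trials):
    """Return (d1, d2): mean pairwise distance within trials 1-10 and 11-20."""
    d1 = pdist(trials[:10]).mean()    # Phase-I
    d2 = pdist(trials[10:20]).mean()  # Phase-II
    return d1, d2

# Illustrative random data standing in for per-trial EEG feature vectors.
trials = np.random.default_rng(3).normal(size=(20, 64))
d1, d2 = phase_distances(trials)
print(f"d1 = {d1:.3f}, d2 = {d2:.3f}")  # d2 < d1 would indicate more consistent late-phase responses
```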
... As we had clear predictions about the time course of the expected effects, we restricted our analysis to the time window between 100 and 500 msec post-stimulus onset. Previous electrophysiological research has indeed shown that this period covers the N/P150, N2, and N400m effects (e.g., Almeida & Poeppel, 2013; Duñabeitia et al., 2012). ...
... For the MEG analyses, 24 participants were therefore recruited. Note, however, that this number of planned participants is larger than what is typically reported in recent visual word recognition MEG studies (i.e., 12-20; Almeida & Poeppel, 2013; Assadollahi & Pulvermüller, 2003; Chen et al., 2013, 2015; Cornelissen et al., 2009; Hauk et al., 2012; Pylkkänen & Okano, 2010; Simos et al., 2009; Tsigka, Papadelis, Braun, & Miceli, 2014). ...
... Like the late repetition effect, this early response in the P200 time window seems to index mechanisms underlying lexical processing. Indeed, Almeida and Poeppel (2013) observed a repetition effect in this time window whose amplitude for words differed from that for pseudowords. Therefore, the ERP word repetition effect appears to be a good tool to investigate lexical processing during two different time windows. ...
... We observed an early repetition effect only in the auditory-only modality. This repetition effect over the P200 time window has been linked to early stages of lexical processing, as words but not pseudowords elicit this effect (Almeida & Poeppel, 2013). However, ERP responses over both the P200 and the N400 time windows are frequently observed during word/sentence processing, as these components reflect different underlying processes. ...
Article
Numerous studies suggest that audiovisual speech influences lexical processing. However, it is not clear which stages of lexical processing are modulated by audiovisual speech. In this study, we examined the time course of the access to word representations in long-term memory when they were presented in auditory-only and audiovisual modalities. We exploited the effect of the prior access to a word on the subsequent access to that word known as the word repetition effect. Using event-related potentials, we identified an early time window at about 200 milliseconds and a late time window starting at about 400 milliseconds related to the word repetition effect. Our results showed that the word repetition effect over the early time window was modulated by the speech modality while this influence of speech modality was not found over the late time window. Visual cues thus play a role in the early stages of lexical processing.
... To our knowledge, the P2 has not been extensively studied within the sentence processing or syntactic violation literature. Instead, the P2 has primarily been investigated within the lexical processing literature, where the P2 has been linked to prediction/expectation mismatches (see Almeida & Poeppel, 2013, for some discussion). Given that, and given that Neville et al. (1991) do not provide a functional interpretation of the P2 in their discussion, for this study, we will tentatively interpret the P2 as indexing prediction/expectation mismatches and note that there is need for future work investigating the functional interpretation of the P2 in sentence processing. ...
Article
Full-text available
In principle, functional neuroimaging provides uniquely informative data in addressing linguistic questions, because it can indicate distinct processes that are not apparent from behavioral data alone. This could involve adjudicating the source of unacceptability via the different patterns of elicited brain responses to different ungrammatical sentence types. However, it is difficult to interpret brain activations to syntactic violations. Such responses could reflect processes that are not intrinsically related to linguistic representations, such as domain-general executive function abilities. To facilitate the potential use of functional neuroimaging methods to identify the source of different syntactic violations, we conducted a functional magnetic resonance imaging experiment to identify the brain activation maps associated with two distinct syntactic violation types: phrase structure (created by inverting the order of two adjacent words within a sentence) and subject islands (created by extracting a wh-phrase out of an embedded subject). The comparison of these violations to control sentences surprisingly showed no indication of a generalized violation response, with almost completely divergent activation patterns. Phrase structure violations seemingly activated regions previously implicated in verbal working memory and structural complexity in sentence processing, whereas the subject islands appeared to activate regions previously implicated in conceptual-semantic processing, broadly defined. We review our findings in the context of previous research on syntactic and semantic violations using ERPs. Although our results suggest potentially distinct mechanisms underlying phrase structure and subject island violations, they are tentative and suggest important methodological considerations for future research in this area.
... Likewise, a number of studies have investigated the nature of the N400 component in word processing (Kutas & Federmeier, 2011). There is a long history of investigating the link between specific ERP components and specific stages of sensory and cognitive processing (for review, see Almeida & Poeppel, 2013; Carreiras et al., 2014; Grainger & Holcomb, 2009; Sprouse & Almeida, 2023). ...
Article
Full-text available
A key aspect of efficient visual processing is to use current and previous information to make predictions about what we will see next. In natural viewing, and when looking at words, there is typically an indication of forthcoming visual information from extrafoveal areas of the visual field before we make an eye movement to an object or word of interest. This “preview effect” has been studied for many years in the word reading literature and, more recently, in object perception. Here, we integrated methods from word recognition and object perception to investigate the timing of the preview on neural measures of word recognition. Through a combined use of EEG and eye-tracking, a group of multilingual participants took part in a gaze-contingent, single-shot saccade experiment in which words appeared in their parafoveal visual field. In valid preview trials, the same word was presented during the preview and after the saccade, while in the invalid condition, the saccade target was a number string that turned into a word during the saccade. As hypothesized, the valid preview greatly reduced the fixation-related evoked response. Interestingly, multivariate decoding analyses revealed much earlier preview effects than previously reported for words, and individual decoding performance correlated with participant reading scores. These results demonstrate that a parafoveal preview can influence relatively early aspects of post-saccadic word processing and help to resolve some discrepancies between the word and object literatures.
... The N400 component is typically sensitive to lexical and semantic processing (Kutas & Federmeier, 2011; Kutas & Hillyard, 1980). Previous studies showed strong effects when extracting meaning from perceived words (Eisenhauer, Fiebach, & Gagl, 2019; Eisenhauer et al., 2022; Almeida & Poeppel, 2013; Fiebach, Gruber, & Supp, 2005; Dufau, Grainger, Midgley, & Holcomb, 2015) and capturing an implicit, probabilistic representation of sentence comprehension (Rabovsky, Hansen, & McClelland, 2018). The observation that the new oPE formulation, including more precise representations, was the adequate predictor for brain potentials in this time window may indicate the role of the representation as a key to word meaning. ...
Preprint
Full-text available
Recent evidence suggests that readers optimize low-level visual information following the principles of predictive coding. Based on a transparent neurocognitive model, we postulated that readers optimize their percept by removing redundant visual signals, which allows them to focus on the informative aspects of the sensory input, i.e., the orthographic prediction error (oPE). Here, we test alternative oPE implementations by assuming all-or-nothing signaling units based on multiple thresholds and compare them to the original oPE implementation. For model evaluation, we implemented the comparison based on behavioral and electrophysiological data (EEG at 230 and 430 ms). We found the highest model fit for the oPE with a 50% threshold integrating multiple prediction units for behavior and the late EEG component. The early EEG component was still explained best by the original hypothesis. In the final evaluation, we used image representations of both oPE implementations as input to a deep neural network model (DNN). We compared the lexical decision performance of the DNN in two tasks (words vs. consonant strings; words vs. pseudowords) to the performance after training with unaltered word images and found better DNN performance when trained with the 50% oPE representations in both tasks. Thus, the new formulation is adequate for late but not early neuronal signals and lexical decision behavior in humans and machines. The change from early to late neuronal processing likely reflects a transformation in the representational structure over time that relates to accessing the meaning of words.
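For concreteness, the contrast between a graded orthographic prediction error and the thresholded "all-or-nothing" variant discussed above can be sketched as follows. This is an illustrative reconstruction only: it assumes binary pixel images of words, a hypothetical `lexicon_imgs` array, and a pixel-wise mean over the lexicon as the prediction, which may differ from the authors' exact implementation.

```python
# Graded vs. thresholded orthographic prediction error (oPE), illustrative only.
import numpy as np

def ope_graded(word_img, lexicon_imgs):
    prediction = lexicon_imgs.mean(axis=0)          # graded pixel-wise prediction from the lexicon
    return np.abs(word_img - prediction)            # graded prediction-error image

def ope_thresholded(word_img, lexicon_imgs, threshold=0.5):
    # All-or-nothing prediction units: a pixel is predicted only if >= 50% of words contain it.
    prediction = (lexicon_imgs.mean(axis=0) >= threshold).astype(float)
    return np.abs(word_img - prediction)

# Random binary "word images" (n_words, height, width) standing in for a rendered lexicon.
rng = np.random.default_rng(4)
lexicon_imgs = (rng.random((500, 20, 80)) > 0.5).astype(float)
word_img = lexicon_imgs[0]
print(ope_graded(word_img, lexicon_imgs).sum(), ope_thresholded(word_img, lexicon_imgs).sum())
```

A scalar summary of the error image (e.g., its sum) can then serve as a trial-level predictor of behavior or EEG amplitudes, which is the kind of model comparison the abstract describes.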
... These prime-target differences also involved qualitative changes, with larger activations for words versus pseudowords at the prime and the inverse effect at the target. These findings suggest that context-dependent facilitation is implemented at orthographic and lexical-semantic processing levels (e.g., Almeida & Poeppel, 2013; Brothers et al., 2015). ...
Article
Full-text available
To a crucial extent, the efficiency of reading results from the fact that visual word recognition is faster in predictive contexts. Predictive coding models suggest that this facilitation results from pre-activation of predictable stimulus features across multiple representational levels before stimulus onset. Still, it is not sufficiently understood which aspects of the rich set of linguistic representations that are activated during reading—visual, orthographic, phonological, and/or lexical-semantic—contribute to context-dependent facilitation. To investigate in detail which linguistic representations are pre-activated in a predictive context and how they affect subsequent stimulus processing, we combined a well-controlled repetition priming paradigm, including words and pseudowords (i.e., pronounceable nonwords), with behavioral and magnetoencephalography measurements. For statistical analysis, we used linear mixed modeling, which we found had a higher statistical power compared to conventional multivariate pattern decoding analysis. Behavioral data from 49 participants indicate that word predictability (i.e., context present vs. absent) facilitated orthographic and lexical-semantic, but not visual or phonological processes. Magnetoencephalography data from 38 participants show sustained activation of orthographic and lexical-semantic representations in the interval before processing the predicted stimulus, suggesting selective pre-activation at multiple levels of linguistic representation as proposed by predictive coding. However, we found more robust lexical-semantic representations when processing predictable in contrast to unpredictable letter strings, and pre-activation effects mainly resembled brain responses elicited when processing the expected letter string. This finding suggests that pre-activation did not result in “explaining away” predictable stimulus features, but rather in a “sharpening” of brain responses involved in word processing.
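The abstract above notes that trial-level linear mixed modeling gave higher statistical power than multivariate decoding. A generic sketch of such a model in Python with statsmodels is shown below; the column names, the synthetic data, and the random-intercept-only structure are assumptions for illustration, not the authors' specification.

```python
# Trial-level linear mixed model: fixed effects of context and lexicality, random intercept per subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data standing in for single-trial MEG amplitudes.
rng = np.random.default_rng(1)
n_sub, n_trial = 20, 80
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_sub), n_trial),
    "context": np.tile(np.repeat(["prime", "target"], n_trial // 2), n_sub),
    "lexicality": np.tile(["word", "pseudoword"] * (n_trial // 2), n_sub),
    "amplitude": rng.normal(size=n_sub * n_trial),
})

model = smf.mixedlm("amplitude ~ context * lexicality", data=df, groups=df["subject"])
print(model.fit().summary())
```

In practice such a model would be fit per sensor/source and time window, possibly with additional random slopes where the design supports them.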
... The word repetition effect is the finding that prior processing of words, but not nonwords, facilitates subsequent processing of those same words (e.g., participants will identify a word faster the second time it is presented; e.g., Forbach et al., 1974). The P200 ERP component is known to be modulated by word repetition (e.g., Almeida & Poeppel, 2013). However, Basirat et al. (2018) found that this repetition effect on the P200 interacted with multisensory context. ...
Article
Full-text available
Speech selective adaptation is a phenomenon in which repeated presentation of a speech stimulus alters subsequent phonetic categorization. Prior work has reported that lexical, but not multisensory, context influences selective adaptation. This dissociation suggests that lexical and multisensory contexts influence speech perception through separate and independent processes (see Samuel & Lieblich, 2014). However, this dissociation is based on results reported by different studies using different stimuli. This leaves open the possibility that the divergent effects of multisensory and lexical contexts on selective adaptation may be the result of idiosyncratic differences in the stimuli rather than separate perceptual processes. The present investigation used a single stimulus set to compare the selective adaptation produced by lexical and multisensory contexts. In contrast to the apparent dissociation in the literature, we find that multisensory information can in fact support selective adaptation.
... The healthy subjects had differences in the gamma range when reading words and pseudowords: ERS of gamma was greater when reading words than pseudowords (p<0.0001). This time interval can be associated with lexical access (Almeida and Poeppel, 2013) and a stimulus categorization process (Pernet et al., 2003). In particular, a power increase in the gamma frequency band was demonstrated during the perception of words related to the meaning of the proposed sentence (Rommers et al., 2013). ...
Preprint
Full-text available
We studied the evoked changes in brain rhythmic activity while reading semantic (words) and meaningless (pseudowords) verbal information in patients with paranoid schizophrenia (n=40) and in a control group of healthy subjects (n=64). Patients with paranoid schizophrenia showed a decrease in the event-related synchronization (ERS) of alpha and theta rhythms compared to controls when reading semantic verbal information. Reduced event-related synchronization of the alpha rhythm in the 105-145 ms time window may be associated with the severity of hallucinations (P3 scale) on the PANSS. In contrast to the control group, the patients had no differences in the gamma range when reading words and pseudowords. This fact may indicate that patients with paranoid schizophrenia assign significance to stimuli that are not normally considered significant (pseudowords).
... The neural network of the M3 component time-window at the source level showed predominant connections between Wernicke's area and Broca's area in the same hemisphere or across the hemispheres. Components in this time-window may be sensitive to lexical access (Almeida and Poeppel, 2013; Whiting et al., 2015). In the age group of 6-9 years, neural network analyses revealed that 13 out of the 20 subjects (65%) had network connections between the left and right Wernicke's areas and/or Broca's area (bilateral networks); the other 7 subjects in this group showed unilateral networks. ...
Article
The brain undergoes enormous changes during childhood. Little is known about how the brain develops to serve word processing. The objective of the present study was to investigate the maturational changes of word processing in children and adolescents using magnetoencephalography (MEG). Responses to a word processing task were investigated in sixty healthy participants. Each participant was presented with simultaneous visual and auditory word pairs in "match" and "mismatch" conditions. The patterns of neuromagnetic activation from MEG recordings were analyzed at both sensor and source levels. Topography and source imaging revealed that word processing transitioned from bilateral connections to unilateral connections as age increased from 6 to 17 years old. Correlation analyses of language networks revealed that the path length of word processing networks negatively correlated with age (r = -0.833, p < 0.0001), while the connection strength (r = 0.541, p < 0.01) and the clustering coefficient (r = 0.705, p < 0.001) of word processing networks were positively correlated with age. In addition, males had more visual connections, whereas females had more auditory connections. The correlations between gender and path length, gender and connection strength, and gender and clustering coefficient demonstrated a developmental trend without reaching statistical significance. The results indicate that the developmental trajectory of word processing is gender specific. Since the neuromagnetic signatures of these gender-specific paths to adult word processing were determined using non-invasive, objective, and quantitative methods, the results may play a key role in understanding language impairments in pediatric patients in the future.
... Because of possible interactions between the neural response magnitude and the neural source distribution at the sensor level (refs 22, 23), a multivariate measurement technique ('angle test of response similarity') was implemented to assess the topographical similarity between the auditory responses in the three conditions (loud, soft and no imagery). This technique allows the assessment of spatial similarity in electrophysiological studies regardless of the response magnitude, and estimates the similarities in the distribution of underlying neural sources (for example, refs 5, 39, 46-51). Using this method, each topographical pattern is considered as a high-dimensional vector, where the number of dimensions equals the number of sensors in the recording. ...
Article
Full-text available
The way top-down and bottom-up processes interact to shape our perception and behaviour is a fundamental question and remains highly controversial. How early in a processing stream do such interactions occur, and what factors govern such interactions? The degree of abstractness of a perceptual attribute (for example, orientation versus shape in vision, or loudness versus sound identity in hearing) may determine the locus of neural processing and interaction between bottom-up and internal information. Using an imagery-perception repetition paradigm, we find that imagined speech affects subsequent auditory perception, even for a low-level attribute such as loudness. This effect is observed in early auditory responses in magnetoencephalography and electroencephalography that correlate with behavioural loudness ratings. The results suggest that the internal reconstruction of neural representations without external stimulation is flexibly regulated by task demands, and that such top-down processes can interact with bottom-up information at an early perceptual stage to modulate perception.
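The 'angle test of response similarity' described in the excerpt above treats each sensor topography as a vector with one dimension per sensor and compares conditions by the angle between those vectors, independently of overall amplitude. A minimal sketch of the core computation is given below; array sizes and names are illustrative, and the permutation statistics used in the original studies are omitted.

```python
# Angle between two sensor topographies, independent of response magnitude.
import numpy as np

def topography_angle(topo_a, topo_b):
    """Angle in degrees between two topographies of shape (n_sensors,)."""
    cos_sim = np.dot(topo_a, topo_b) / (np.linalg.norm(topo_a) * np.linalg.norm(topo_b))
    return np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))

# Example: similar source configurations yield small angles even if amplitudes differ.
rng = np.random.default_rng(2)
base = rng.normal(size=157)                          # e.g., a 157-sensor MEG topography
print(topography_angle(base, 0.5 * base))            # ~0 degrees: same pattern, smaller magnitude
print(topography_angle(base, rng.normal(size=157)))  # ~90 degrees for unrelated patterns
```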
... Brain-imaging studies (ERP and MEG) are informative as to the duration of this settling process. The earliest effects of lexical attributes can be detected in the N170/M150, while the strongest lexical effects emerge in the N400/M350 (e.g., Almeida & Poeppel, 2013; Assadollahi & Pulvermüller, 2003; Carreiras, Armstrong, Perea, & Frost, 2014; Eberhard-Moscicka, Jost, Fehlbaum, Pfenninger, & Maurer, 2016; Hauk, Davis, Ford, Pulvermüller, & Marslen-Wilson, 2006). Hence, after the initial activation of OWFs at ~150 ms post-stimulus, settling takes about 200 ms. ...
Article
Full-text available
Most researchers who study visual word recognition assume that an inhibitory length effect (slower Reaction Times for longer words) indicates serial letter processing, while absence of a length effect indicates parallel letter processing. This article discusses why the latter assumption is incorrect. In particular, the SERIOL and SERIOL2 models of orthographic processing imply that, for a specific stimulus configuration, the proposed serial letter processing should yield faster Reaction Times for longer words in the lexical-decision task. Published experimental data confirm this surprising implication, providing strong support for the serialisation mechanism of the SERIOL models.
... (See http://www.fieldtriptoolbox.org/faq/how_can_i_test_an_interaction_effect_using_cluster-based_permutation_tests regarding the coding of factorial interactions in FieldTrip; for a similar analysis see [30]). ...
Article
Full-text available
Sentence-initial temporal clauses headed by before, as in "Before the scientist submitted the paper, the journal changed its policy", have been shown to elicit sustained negative-going brain potentials compared to maximally similar clauses headed by after, as in "After the scientist submitted the paper, the journal changed its policy". Such effects may be due to either one of two potential causes: before clauses may be more difficult than after clauses because they cause the two events in the sentence to be mentioned in an order opposite the order in which they actually occurred, or they may be more difficult because they are ambiguous with regard to whether the event described in the clause actually happened. The present study examined the effect of before and after clauses on sentence processing in both sentence-initial contexts, like those above, and in sentence-final contexts ("The journal changed its policy before/after the scientist submitted the paper"), where an order-of-mention account of the sustained negativity predicts a negativity for after relative to before. There was indeed such a reversal, with before eliciting more negative brain potentials than after in sentence-initial clauses but more positive in sentence-final clauses. The results suggest that the sustained negativity indexes processing costs related to comprehending events that were mentioned out of order.
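The FieldTrip FAQ referenced in the excerpt above recommends testing a 2 x 2 interaction by computing per-subject difference waves and then running a cluster-based permutation test on the difference of those differences. The sketch below expresses that logic in MNE-Python rather than FieldTrip (the original toolbox is MATLAB-based); the condition labels and synthetic data are placeholders, not the study's design.

```python
# Interaction test via difference-of-differences plus a one-sample cluster permutation test.
import numpy as np
from mne.stats import permutation_cluster_1samp_test  # pip install mne

# Synthetic per-subject ERPs for a 2x2 design, shape (n_subjects, n_times) per cell.
rng = np.random.default_rng(0)
n_sub, n_times = 24, 200
erp = {c: rng.normal(size=(n_sub, n_times)) for c in ["A1_B1", "A2_B1", "A1_B2", "A2_B2"]}

# Difference waves for factor A within each level of B; their difference carries the interaction.
interaction = (erp["A1_B1"] - erp["A2_B1"]) - (erp["A1_B2"] - erp["A2_B2"])

# Testing the interaction against zero across subjects, with cluster-based correction over time.
t_obs, clusters, cluster_pv, _ = permutation_cluster_1samp_test(interaction, n_permutations=1000)
print(cluster_pv)
```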
... Priming paradigms have been used in numerous neurobiological studies using EEG (Jouravlev, Lupker, and Jared, 2014; Llorens et al., 2014; Riès et al., 2015), MEG (Brennan et al., 2014; Whiting, Shtyrov, and Marslen-Wilson, 2014), and fMRI (Almeida and Poeppel, 2013; Massol et al., 2010; Savill & Thierry, 2011). EEG and MEG studies can offer precise information about the time course of prime and target processing. ...
Chapter
Full-text available
In word priming and interference studies, researchers typically present participants with pairs of words (called primes and targets) and assess how the processing of the targets (e.g. "nurse") is affected by different types of primes (e.g., semantically related and unrelated primes, such as "doctor" and "spoon"). Priming and interference paradigms have been used to study a broad range of issues concerning the structure of the mental lexicon and the ways linguistic representations are accessed during word comprehension and production. In this chapter, we illustrate the use of the paradigms in two exemplary studies, and then discuss the factors researchers need to take into account when selecting their stimuli, designing their experiments, and analyzing the results.
... The massive repetition of words from targets and repaired foils in our experiment effectively increases their frequency. The P2 has also been related to lexical access (Almeida & Poeppel, 2013), which is consistent with our interpretation of the second bump as reflecting the signal to retrieve word meaning. ...
Article
Full-text available
We introduce a method for measuring the number and durations of processing stages from the electroencephalographic signal and apply it to the study of associative recognition. Using an extension of past research that combines multivariate pattern analysis with hidden semi-Markov models, the approach identifies on a trial-by-trial basis where brief sinusoidal peaks (called bumps) are added to the ongoing electroencephalographic signal. We propose that these bumps mark the onset of critical cognitive stages in processing. The results of the analysis can be used to guide the development of detailed process models. Applied to the associative recognition task, the hidden semi-Markov model multivariate pattern analysis method indicates that the effects of associative strength and probe type are localized to a memory retrieval stage and a decision stage. This is in line with a previously developed process model of the task in the adaptive control of thought-rational (ACT-R) architecture. As a test of the generalization of our method, we also apply it to a data set on the Sternberg working memory task collected by Jacobs, Hwang, Curran, and Kahana (2006). The analysis generalizes robustly, and localizes the typical set size effect in a late comparison/decision stage. In addition to providing information about the number and durations of stages in associative recognition, our analysis sheds light on the event-related potential components implicated in the study of recognition memory.
... The M400 response was quantified using a canonical 200 ms window between 300 and 500 ms (e.g., Almeida & Poeppel, 2013; Lau et al., 2009). The cosine similarity between the topography of the peak of each early evoked response and the other time points within their respective temporal windows was larger than .95, ...
Article
Full-text available
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
... Critically, a non-predictive account must assume that access to the contents of lexical information is ordered, such that category information is accessed earlier than the subcategorization property of the verb. However, as yet there is little evidence to support such ordered access to category vs. other contents of a verb (Farmer et al., 2006 is one rare case, but see Staub et al., 2009 for a counterargument), whereas there is an abundance of psycholinguistic and neurolinguistic research demonstrating extremely fast access to all aspects of lexical content (e.g., Federmeier et al., 2000; Dambacher et al., 2006; Hauk et al., 2006; Staub and Rayner, 2007; Tanenhaus, 2007; Almeida and Poeppel, 2013; Chow et al., 2014). Moreover, there has been a recent surge of empirical work demonstrating that structure building processes can proceed predictively based on various types of top-down linguistic and contextual information, as discussed above (e.g., Konieczny, 2000; Kamide et al., 2003; DeLong et al., 2005; Van Berkum et al., 2005; Lau et al., 2006; Staub and Clifton, 2006; Levy and Keller, 2013; Yoshida et al., 2013; Yoshida, unpublished doctoral dissertation), including access to transitivity information (Arai and Keller, 2013). ...
Article
Full-text available
Much work has demonstrated that speakers of verb-final languages are able to construct rich syntactic representations in advance of verb information. This may reflect general architectural properties of the language processor, or it may only reflect a language-specific adaptation to the demands of verb-finality. The present study addresses this issue by examining whether speakers of a verb-medial language (English) wait to consult verb transitivity information before constructing filler-gap dependencies, where internal arguments are fronted and hence precede the verb. This configuration makes it possible to investigate whether the parser actively makes representational commitments on the gap position before verb transitivity information becomes available. A key prediction of the view that rich pre-verbal structure building is a general architectural property is that speakers of verb-medial languages should predictively construct dependencies in advance of verb transitivity information, and therefore that disruption should be observed when the verb has intransitive subcategorization frames that are incompatible with the predicted structure. In three reading experiments (self-paced and eye-tracking) that manipulated verb transitivity, we found evidence for reading disruption when the verb was intransitive, although no such reading difficulty was observed when the critical verb was embedded inside a syntactic island structure, which blocks filler-gap dependency completion. These results are consistent with the hypothesis that in English, as in verb-final languages, information from preverbal noun phrases is sufficient to trigger active dependency completion without having access to verb transitivity information.
... There are, of course, many ways to illustrate the progress that has been made, highlighting new ideas and directions. One approach would be to review the different aspects or levels of language processing that have been examined in new neuroscientific experimentation, i.e., phonetics, phonology [5,6], lexical access [7-10], lexical semantics [11], syntax [12,13], compositional semantics [14,15], discourse representation [16,17]; moreover, the interaction of the linguistic computational system with other domains has been investigated in interesting ways, including how language processing interfaces with attention [18], memory [19], emotion [20], cognitive control [21], predictive coding [22-24], and even aesthetics [25]. A different approach is taken here, focusing first on the revised spatial map of brain and language; then, narrowing to one functional problem, a new 'temporal view' is discussed to illustrate a linking hypothesis between the computational requirements of speech perception and the neurobiological infrastructure that may provide a neural substrate. ...
Article
New tools and new ideas have changed how we think about the neurobiological foundations of speech and language processing. This perspective focuses on two areas of progress. First, focusing on spatial organization in the human brain, the revised functional anatomy for speech and language is discussed. The complexity of the network organization undermines the well-regarded classical model and suggests looking for more granular computational primitives, motivated both by linguistic theory and neural circuitry. Second, focusing on recent work on temporal organization, a potential role of cortical oscillations for speech processing is outlined. Such an implementational-level mechanism suggests one way to deal with the computational challenge of segmenting natural speech.
Preprint
Full-text available
Literate humans can effortlessly interpret tens of thousands of words, even when the words are sometimes written incorrectly. This phenomenon suggests a flexible nature of reading that can endure a certain amount of noise. In this study, we investigated where and when brain responses diverged for conditions where misspelled words were resolved as real words or not. We used magnetoencephalography (MEG) to track the cortical activity as the participants read words with different degrees of misspelling that were perceived to range from real words to complete pseudowords, as confirmed by their behavioral responses. In particular, we were interested in how lexical information survives (or not) along the uncertainty spectrum, and how the corresponding brain activation patterns evolve spatiotemporally. We identified three brain regions that were notably modulated by misspellings: left ventral occipitotemporal cortex (vOT), superior temporal cortex (ST), and precentral cortex (pC). This suggests that resolving misspelled words into stored concepts involves an interplay between orthographic, semantic, and phonological processing. Temporally, these regions showed fairly late and sustained responses selectively to misspelled words. Specifically, an increasing level of misspelling increased the response in ST from 300 ms after stimulus onset; a functionally fairly similar but weaker effect was observed in pC. In vOT, misspelled words were sharply distinguished from real words notably later, after 700 ms. A linear mixed effects (LME) analysis further showed that pronounced and long-lasting misspelling effects appeared first in ST and then in pC, with shorter-lasting activation also observed in vOT. We conclude that reading misspelled words engages brain areas typically associated with language processing, but in a manner that cannot be interpreted merely as a rapid feedforward mechanism. Instead, feedback interactions likely contribute to the late effects observed during misspelled-word reading.
Article
Full-text available
Semantic processing is the ability to discern and maintain conceptual relationships among words and objects. While the neural circuits serving semantic representation and controlled retrieval are well established, the neuronal dynamics underlying these processes are poorly understood. Herein, we examined 25 healthy young adults who completed a semantic relation word-matching task during magnetoencephalography (MEG). MEG data were examined in the time–frequency domain and significant oscillatory responses were imaged using a beamformer. Whole-brain statistical analyses were conducted to compare semantic-related to length-related neural oscillatory responses. Time series were extracted to visualize the dynamics and were linked to task performance using structural equation modeling. The results indicated that participants had significantly longer reaction times in semantic compared to length trials. Robust MEG responses in the theta (3–6 Hz), alpha (10–16 Hz), and gamma (64–76 Hz and 64–94 Hz) bands were observed in parieto-occipital and frontal cortices. Whole-brain analyses revealed stronger alpha oscillations in a left-lateralized network during semantically related relative to length trials. Importantly, stronger alpha oscillations in the left superior temporal gyrus during semantic trials predicted faster responses. These data reinforce existing literature and add novel temporal evidence supporting the executive role of the semantic control network in behavior.
Article
Full-text available
Psycholinguistic research on the processing of morphologically complex words has largely focused on debates about how/if lexical stems are recognized, stored and retrieved. Comparatively little processing research has investigated similar issues for functional affixes. In Word or Lexeme Based Morphology (Aronoff, 1994), affixes are not representational units on par with stems or roots. This view is in stark contrast to the claims of linguistic theories like Distributed Morphology (Halle & Marantz, 1993), which assign rich representational content to affixes. We conducted a series of eight visual lexical decision studies, evaluating effects of derivational affix priming along with stem priming, identity priming, form priming and semantic priming at long and short lags. We find robust and consistent affix priming (but not semantic or form priming) with lags up to 33 items, supporting the position that affixes are morphemes, i.e., representational units on par with stems. Intriguingly, we find only weaker evidence for the long-lag stem priming effect found in other studies. We interpret this asymmetry in terms of the salience of different morphological contexts for recollection memory.
Article
We identified a potential neurophysiological marker for processing of verbal cues in paranoid schizophrenia: high desynchronization in the beta-2 band in the right parietal area for meaningless cues, and no synchronization differences in the beta-2 and gamma bands in the left prefrontal area pointing to deficient categorization of the stimuli.
Conference Paper
We measured the magnetic cortical response to words and analyzed its variability across individual trials. Considerable trial-to-trial variation in the amplitudes of different components of the response suggests that the active spot hops from one point to another within the cortical area responsible for a given processing stage. This behavior is similar to that observed in experiments on voluntary movement. We believe that only a small fraction of the cortical area involved in a given neural function is active during any particular trial; on the next trial, another spot in the area is activated. This explains the high variability of the magnetic signals, which are extremely sensitive to the position of the active spot on the folded cortical surface.
Article
Speech perception refers to the suite of (neural, computational, cognitive) operations that transform auditory input signals into representations that can make contact with internally stored information: the words in a listener’s mental lexicon. Speech perception is typically studied using single speech sounds (e.g., vowels or syllables), spoken words, or connected speech. Based on neuroimaging, lesion, and electrophysiological data, dual stream neurocognitive models of speech perception have been proposed that identify ventral stream functions (mapping from sound to meaning) and dorsal stream functions (mapping from sound to articulation). Major outstanding research questions include cerebral lateralization, the role of neuronal oscillations, and the contribution of top-down, abstract knowledge in perception.
Article
Full-text available
A perceptual-identification task was used to assess priming for words and pseudowords that in their upper- and lowercase formats share either few (high-shift items) or many (low-shift items) visual features. Equivalent priming was obtained for high-shift words repeated in the same case and in a different case, and this priming was greatly reduced when there was a study-test modality shift. Accordingly, the cross-case priming was mediated in large part by modality-specific perceptual codes. By contrast, priming for high-shift pseudowords was greatly reduced following the case manipulation, as was so for high-shift words when they were randomly intermixed with pseudowords. Low-shift items were not affected by the case manipulation. On the basis of the overall pattern of results, the author argues that different mechanisms mediate priming for words and pseudowords and that J. Morton (1979) was essentially correct in his characterization of word priming.
Article
Full-text available
Recent PET studies have suggested a specific anatomy for feature identification, visual word forms and semantic associations. Our studies seek to explore the time course of access to these systems by use of reaction time and scalp electrical recording. Target detection times suggest that different forms of representation are involved in the detection of letter features, feature conjunctions (letters), and words. Feature search is fastest at the fovea and slows symmetrically with greater foveal eccentricity. It is not influenced by lexicality. Detecting a letter case (conjunction) shows a left-to-right search which differs between words and consonant strings. Analysis of the scalp electrical distribution suggests an occipito-temporal distribution for the analysis of visual features (right sided) and for visual word form discrimination (left sided). These fit with the PET results, and suggest that the feature-related analysis begins within the first 100 millisec and the visual word form discriminates words from strings by about 200 msec. Lexical decision instructions can modify the computations found in both frontal and posterior areas.
Article
Full-text available
The structure of lexical entries and the status of lexical decomposition remain controversial. In the psycholinguistic literature, one aspect of this debate concerns the psychological reality of the morphological complexity difference between compound words (teacup) and single words (crescent). The present study investigates morphological decomposition in compound words using visual lexical decision with simultaneous magnetoencephalography (MEG), comparing compounds, single words, and pseudomorphemic foils. The results support an account of lexical processing which includes early decomposition of morphologically complex words into constituents. The behavioural differences suggest internally structured representations for compound words, and the early effects of constituents in the electrophysiological signal support the hypothesis of early morphological parsing. These findings add to a growing literature suggesting that the lexicon includes structured representations, consistent with previous findings supporting early morphological parsing using other tasks. The results do not favour two putative constraints, word length and lexicalisation, on early morphological-structure based computation.
Article
Full-text available
Subjects performed a nonword detection task, in which they responded to occasional nonwords embedded in a series of words. The stimuli were equally likely to be presented in the visual or auditory modality. Some of the words were repetitions of items that had occurred six items previously. Repetitions were in either the same or in the alternative modality. Compared to event-related potentials (ERPs) evoked by unrepeated, visually presented words, visual-visual repetitions gave rise to a sustained positive shift, which onset around 250 msec. Auditory-visual repetition also gave rise to a positive shift. This onset some 100 msec later than that associated with within-modality repetition. For auditory ERPs, within- and across-modality repetition gave rise to almost identical effects, consisting of a sustained positive-going shift onsetting around 400 msec. The findings were interpreted as reflecting the different representations generated by visually and auditorily presented words. Whereas visually presented words lead to the generation of both orthographic and phonological representations, auditory input leads solely to the generation of phonological representations.
Article
Full-text available
The study of eye movements has become a well established and widely used methodology in experimental reading research. This Introduction provides a survey of some key methodological issues, followed by a discussion of major trends in the development of theories and models of eye movement control in fluent reading. Among the issues to be considered in future research are problems of methodology, a stronger grounding in basic research, integration with the neighbouring area of research on single word recognition, more systematic approaches to model evaluation and comparison, and more work on individual variation and effects of task demands in reading.
Article
Full-text available
This article presents a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task. Processing is considered automatic if it relies on retrieval of stored instances, which will occur only after practice in a consistent environment. Practice is important because it increases the amount retrieved and the speed of retrieval; consistency is important because it ensures that the retrieved instances will be useful. The theory accounts quantitatively for the power-function speed-up and predicts a power-function reduction in the standard deviation that is constrained to have the same exponent as the power function for the speed-up. The theory accounts for qualitative properties as well, explaining how some may disappear and others appear with practice. More generally, it provides an alternative to the modal view of automaticity.
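As a compact illustration of the quantitative constraint described above (a generic sketch with placeholder parameters, not the paper's fitted values), with N the number of practice trials:

    \mathrm{RT}(N) = a + b\,N^{-c} \qquad \mathrm{SD}(N) = a' + b'\,N^{-c}

The theory requires the same exponent c to govern both the speed-up of the mean and the reduction of the standard deviation.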
Article
Full-text available
Conducted 5 reaction time (RT) experiments with 75 undergraduates to explore word-frequency effects in word-nonword decision tasks and in pronunciation and memory tasks. High-frequency words were recognized substantially faster than low-frequency words in the word-nonword decision tasks. However, there was little effect of word frequency in the pronunciation and old-new memory tasks. Further, in the word-nonword lexical decision task, prior presentations of words produced substantial and apparently long-lasting reductions on the basic frequency effect. The occurrence of natural language frequency effects only in the word-nonword decision task supported the use of this task to study the organization of and retrieval from the subjective lexicon. The modification of frequency effects by repetition suggested that natural language frequency effects may be attributed partly to the recency with which words have occurred. Analysis of the response latencies using S. Sternberg's additive-factors approach indicated that frequency effects consist of both effects in encoding and in retrieval from memory.
Article
Full-text available
Recent masked priming studies on visual word recognition have suggested that morphological decomposition is performed prelexically, purely on the basis of the orthographic properties of the word form. Given this, one might expect morphological complexity to modulate early visual evoked activity in electromagnetic measures. We investigated the neural bases of morphological decomposition with magnetoencephalography (MEG). In two experiments, we manipulated morphological complexity in single word lexical decision without priming, once using suffixed words and once using prefixed words. We found that morphologically complex forms display larger amplitudes in the M170, the same component that has been implicated for letterstring and face effects in previous MEG studies. Although letterstring effects have been reported to be left-lateral, we found a right-lateral effect of morphological complexity, suggesting that both hemispheres may be involved in early analysis of word forms.
Article
Full-text available
This event-related potential (ERP) study attempts to trace the time course it takes to extract phonology while reading Chinese pseudocharacters. Participants were asked to passively attend to a set of pseudocharacters, each paired with a spoken syllable. This syllable had either a predictable or an unpredictable pronunciation, which was determined by the constituent phonetic radical of the pseudocharacter. The data showed that pseudocharacters paired with predictable or unpredictable pronunciations elicited different ERPs and suggested that Chinese pseudocharacters are pronounceable. Furthermore, pseudocharacters paired with unpredictable pronunciations elicited two greater frontal positivities, p2a and p2b, and an enhanced N400. The P2 component could be used to index the early extraction of phonology in reading Chinese pseudocharacters; the N400 was associated with post-lexical processing. These findings suggest that phonetic radicals could be used to suggest pronunciation in the early stage of Chinese lexical processing.
Article
Full-text available
Magnetoencephalography (MEG) is a noninvasive technique for investigating neuronal activity in the living human brain. The time resolution of the method is better than 1 ms and the spatial discrimination is, under favorable circumstances, 2-3 mm for sources in the cerebral cortex. In MEG studies, the weak 10 fT-1 pT magnetic fields produced by electric currents flowing in neurons are measured with multichannel SQUID (superconducting quantum interference device) gradiometers. The sites in the cerebral cortex that are activated by a stimulus can be found from the detected magnetic-field distribution, provided that appropriate assumptions about the source render the solution of the inverse problem unique. Many interesting properties of the working human brain can be studied, including spontaneous activity and signal processing following external stimuli. For clinical purposes, determination of the locations of epileptic foci is of interest. The authors begin with a general introduction and a short discussion of the neural basis of MEG. The mathematical theory of the method is then explained in detail, followed by a thorough description of MEG instrumentation, data analysis, and practical construction of multi-SQUID devices. Finally, several MEG experiments performed in the authors' laboratory are described, covering studies of evoked responses and of spontaneous activity in both healthy and diseased brains. Many MEG studies by other groups are discussed briefly as well.
Article
Full-text available
We tested and confirmed the hypothesis that the effect of prior presentation of nonwords in lexical decision is the net result of two opposing processes: (1) a relatively fast inhibitory process based on global familiarity; and (2) a relatively slow facilitatory process based on the retrieval of specific episodic information. In three studies, we manipulated speed-stress to influence the balance between the two processes. Experiment 1 showed item-specific improvement for repeated nonwords in a standard "respond-when-ready" lexical decision task. Experiment 2 used a 400-ms deadline procedure and showed performance for nonwords to be unaffected by up to four prior presentations. In Experiment 3 we used a signal-to-respond procedure with variable time intervals and found negative repetition priming for repeated nonwords. These results can be accounted for by dual-process models of lexical decision.
Article
Full-text available
We review the discovery, characterization, and evolving use of the N400, an event-related brain potential response linked to meaning processing. We describe the elicitation of N400s by an impressive range of stimulus types--including written, spoken, and signed words or pseudowords; drawings, photos, and videos of faces, objects, and actions; sounds; and mathematical symbols--and outline the sensitivity of N400 amplitude (as its latency is remarkably constant) to linguistic and nonlinguistic manipulations. We emphasize the effectiveness of the N400 as a dependent variable for examining almost every aspect of language processing and highlight its expanding use to probe semantic memory and to determine how the neurocognitive system dynamically and flexibly uses bottom-up and top-down information to make sense of the world. We conclude with different theories of the N400's functional significance and offer an N400-inspired reconceptualization of how meaning processing might unfold.
Article
Full-text available
Two experiments explored repetition priming effects for spoken words and pseudowords in order to investigate abstractionist and episodic accounts of spoken word recognition and repetition priming. In Experiment 1, lexical decisions were made on spoken words and pseudowords with half of the items presented twice (∼12 intervening items). Half of all repetitions were spoken in a "different voice" from the first presentations. Experiment 2 used the same procedure but with stimuli embedded in noise to slow responses. Results showed greater priming for words than for pseudowords and no effect of voice change in both normal and effortful processing conditions. Additional analyses showed that for slower participants, priming is more equivalent for words and pseudowords, suggesting episodic stimulus-response associations that suppress familiarity-based mechanisms that ordinarily enhance word priming. By relating behavioural priming to the time-course of pseudoword identification we showed that under normal listening conditions (Experiment 1) priming reflects facilitation of both perceptual and decision components, whereas in effortful listening conditions (Experiment 2) priming effects primarily reflect enhanced decision/response generation processes. Both stimulus-response associations and enhanced processing of sensory input seem to be voice independent, providing novel evidence concerning the degree of perceptual abstraction in the recognition of spoken words and pseudowords.
Article
Full-text available
Syntactic factors can rapidly affect behavioral and neural responses during language processing; however, the mechanisms that allow this rapid extraction of syntactically relevant information remain poorly understood. We addressed this issue using magnetoencephalography and found that an unexpected word category (e.g., "The recently princess . . . ") elicits enhanced activity in visual cortex as early as 120 ms after exposure, and that this activity occurs as a function of the compatibility of a word's form with the form properties associated with a predicted word category. Because no sensitivity to linguistic factors has been previously reported for words in isolation at this stage of visual analysis, we propose that predictions about upcoming syntactic categories are translated into form-based estimates, which are made available to sensory cortices. This finding may be a key component to elucidating the mechanisms that allow the extreme rapidity and efficiency of language comprehension.
Article
Full-text available
Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16-30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEX(US) corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.
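For illustration only, a minimal Python sketch of the two measures compared in this work, raw frequency per million words and contextual diversity (the proportion of films whose subtitles contain a word); the tokenizer and the toy corpus are assumptions made for this example and are not the actual SUBTLEX pipeline:

    import re
    from collections import Counter

    def subtitle_norms(documents):
        # documents: one string per film subtitle file (toy stand-in for a corpus).
        # Returns, per word, frequency per million tokens and contextual
        # diversity (the proportion of documents that contain the word).
        token_counts = Counter()
        document_counts = Counter()
        total_tokens = 0
        for doc in documents:
            tokens = re.findall(r"[a-z']+", doc.lower())   # naive tokenizer (assumption)
            total_tokens += len(tokens)
            token_counts.update(tokens)
            document_counts.update(set(tokens))            # each word counted once per document
        return {
            word: {
                "freq_per_million": count / total_tokens * 1_000_000,
                "contextual_diversity": document_counts[word] / len(documents),
            }
            for word, count in token_counts.items()
        }

    # Toy usage with three "films".
    films = ["the cat sat on the mat", "the dog barked", "a cat and a dog met"]
    print(subtitle_norms(films)["cat"])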
Article
Full-text available
Human information processing depends critically on continuous predictions about upcoming events, but the temporal convergence of expectancy-based top-down and input-driven bottom-up streams is poorly understood. We show that, during reading, event-related potentials differ between exposure to highly predictable and unpredictable words no later than 90 ms after visual input. This result suggests an extremely rapid comparison of expected and incoming visual information and gives an upper temporal bound for theories of top-down and bottom-up interactions in object recognition.
Article
Full-text available
Measuring event-related potentials (ERPs) has been fundamental to our understanding of how language is encoded in the brain. One particular ERP response, the N400 response, has been especially influential as an index of lexical and semantic processing. However, there remains a lack of consensus on the interpretation of this component. Resolving this issue has important consequences for neural models of language comprehension. Here we show that evidence bearing on where the N400 response is generated provides key insights into what it reflects. A neuroanatomical model of semantic processing is used as a guide to interpret the pattern of activated regions in functional MRI, magnetoencephalography and intracranial recordings that are associated with contextual semantic manipulations that lead to N400 effects.
Article
Full-text available
Visual word recognition studies commonly measure the orthographic similarity of words using Coltheart's orthographic neighborhood size metric (ON). Although ON reliably predicts behavioral variability in many lexical tasks, its utility is inherently limited by its relatively restrictive definition. In the present article, we introduce a new measure of orthographic similarity generated using a standard computer science metric of string similarity (Levenshtein distance). Unlike ON, the new measure-named orthographic Levenshtein distance 20 (OLD20)-incorporates comparisons between all pairs of words in the lexicon, including words of different lengths. We demonstrate that OLD20 provides significant advantages over ON in predicting both lexical decision and pronunciation performance in three large data sets. Moreover, OLD20 interacts more strongly with word frequency and shows stronger effects of neighborhood frequency than does ON. The discussion section focuses on the implications of these results for models of visual word recognition.
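A minimal Python sketch of how an OLD20-style value can be computed, assuming the standard Levenshtein edit distance and a mean over the 20 nearest neighbours; the toy lexicon and helper names are illustrative only, and a real computation would run over a full lexicon of tens of thousands of words:

    def levenshtein(a, b):
        # Standard dynamic-programming edit distance (insertions, deletions,
        # substitutions), keeping only two rows of the table at a time.
        previous = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            current = [i]
            for j, cb in enumerate(b, start=1):
                current.append(min(previous[j] + 1,                # deletion
                                   current[j - 1] + 1,             # insertion
                                   previous[j - 1] + (ca != cb)))  # substitution
            previous = current
        return previous[-1]

    def old20(word, lexicon, n=20):
        # Mean Levenshtein distance from `word` to its n closest words in `lexicon`.
        distances = sorted(levenshtein(word, other) for other in lexicon if other != word)
        return sum(distances[:n]) / min(n, len(distances))

    # Toy usage; a real OLD20 value would be computed over a full lexicon.
    lexicon = ["cat", "cap", "cut", "coat", "bat", "rat", "cart", "chat"]
    print(old20("cat", lexicon, n=3))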
Article
Full-text available
Balota and Chumbley's studies led them to conclude that category verification, lexical decision, and pronunciation tasks involve combinations of processes that cause them to produce differing estimates of the relation between word frequency and ease of lexical identification. Monsell, Doyle, and Haggard challenged Balota and Chumbley's empirical evidence and conclusions, provided empirical evidence to support their challenge, and presented an alternative theoretical position. We show that Monsell et al.'s experiments, analyses, and theoretical perspective do not result in conclusions about the role of word frequency in category verification, lexical decision, and pronunciation that differ from those of Balota and Chumbley.
Article
Full-text available
Event-related brain potentials (ERPs) were recorded as subjects silently read a set of unrelated sentences. The ERP responses elicited by open-class words were sorted according to word frequency and the ordinal position of the eliciting word within its sentence. We observed a strong inverse correlation between sentence position and the amplitude of the N400 component of the ERP. In addition, we found that less frequent words were associated with larger N400s than were more frequent words, but only if the eliciting words occurred early in their respective sentences. We take this interaction between sentence position and word frequency as evidence that frequency does not play a mandatory role in word recognition, but can be superseded by the contextual constraint provided by a sentence.
Article
Full-text available
In a semantic priming paradigm, the effects of different levels of processing on the N400 were assessed by changing the task demands. In the lexical decision task, subjects had to discriminate between words and nonwords, and in the physical task, subjects had to discriminate between uppercase and lowercase letters. The proportion of related versus unrelated word pairs differed between conditions. A lexicality test on reaction times demonstrated that the physical task was performed nonlexically. Moreover, a semantic priming reaction time effect was obtained only in the lexical decision task. The level of processing clearly affected the event-related potentials. An N400 priming effect was only observed in the lexical decision task. In contrast, in the physical task a P300 effect was observed for either related or unrelated targets, depending on their frequency of occurrence. Taken together, the results indicate that an N400 priming effect is only evoked when the task performance induces the semantic aspects of words to become part of an episodic trace of the stimulus event.
Article
Full-text available
In 4 experiments, implicit and explicit memory for words and nonwords were compared. In Experiments 1-2 memory for words and legal nonwords (e.g., kers) was assessed with an identification (implicit) and a recognition (explicit) memory task: Robust priming was obtained for both words and nonwords, and the priming effects dissociated from explicit memory following a levels-of-processing manipulation (Experiment 1) and following a study-test modality shift (Experiment 2). In Experiment 3, priming for legal and illegal nonwords (e.g., xyks) was observed on an identification task, and the effects dissociated from explicit memory following a levels-of-processing manipulation. Finally, in Experiment 4, significant inhibitory priming for legal nonwords was observed when a lexical-decision task was used. Results suggest that implicit memory can extend to legal and illegal nonwords. Implications for theories of implicit memory are discussed.
Article
High density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increase the accuracy of the representation of these processes. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction, and in time, because neighboring time points are correlated), while the degrees of freedom of the data change only a little. This has to be taken into account when statistical inferences are to be made from the data. However, in many ERP studies, the intrinsic correlation structure of the data has been disregarded. Often, some electrodes or groups of electrodes are a priori selected as the analysis entity and considered as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g., in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural “atomic” analysis entity of EEG and ERP data is the scalp electric field.
Article
This article presents a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task. Processing is considered automatic if it relies on retrieval of stored instances, which will occur only after practice in a consistent environment. Practice is important because it increases the amount retrieved and the speed of retrieval; consistency is important because it ensures that the retrieved instances will be useful. The theory accounts quantitatively for the power-function speed-up and predicts a power-function reduction in the standard deviation that is constrained to have the same exponent as the power function for the speed-up. The theory accounts for qualitative properties as well, explaining how some may disappear and others appear with practice. More generally, it provides an alternative to the modal view of automaticity, arguing that novice performance is limited by a lack of knowledge rather than a scarcity of resources. The focus on learning avoids many problems with the modal view that stem from its focus on resource limitations.
Article
Neurones in the human inferior occipitotemporal cortex respond to specific categories of images, such as numbers, letters and faces, within 150-200 ms. Here we identify the locus in time when stimulus-specific analysis emerges by comparing the dynamics of face and letter-string perception in the same 10 individuals. An ideal paradigm was provided by our previous study on letter-strings, in which noise-masking of stimuli revealed putative visual feature processing at 100 ms around the occipital midline followed by letter-string-specific activation at 150 ms in the left inferior occipitotemporal cortex. In the present study, noise-masking of cartoonlike faces revealed that the response at 100 ms increased linearly with the visual complexity of the images, a result that was similar for faces and letter-strings. By 150 ms, faces and letter-strings had entered their own stimulus-specific processing routes in the inferior occipitotemporal cortex, with identical timing and large spatial overlap. However, letter-string analysis lateralized to the left hemisphere, whereas face processing occurred more bilaterally or with right-hemisphere preponderance. The inferior occipitotemporal activations at ~150 ms, which take place after the visual feature analysis at ~100 ms, are likely to represent a general object-level analysis stage that acts as a rapid gateway to higher cognitive processing.
Article
The average duration of eye fixations in reading places constraints on the time for lexical processing. Data from event related potential (ERP) studies of word recognition can illuminate stages of processing within a single fixation on a word. In the present study, high and low frequency regular and exception words were used as targets in an eye movement reading experiment and a high-density electrode ERP lexical decision experiment. Effects of lexicality (words vs pseudowords vs consonant strings), word frequency (high vs low frequency) and word regularity (regular vs exception spelling-sound correspondence) were examined. Results suggest a very early time-course for these aspects of lexical processing within the context of a single eye fixation.
Article
This paper reviews research relevant to the question of whether words are identified through the use of abstract lexical representations, specific episodic representations, or both. Several lines of evidence indicate that specific episodes participate in word identification. First, pure abstractionist theories can explain short-term but not long-term repetition priming. Second, long-term repetition priming is sensitive to changes in surface features or episodic context between presentations of a word. Finally, long-term priming for pseudowords is also difficult for pure abstractionist theories to explain. Alternative approaches to word identification are discussed, including both pure episodic theories and theories in which both episodes and abstract representations play a role.
Article
Word repetition has been a staple paradigm for both psycholinguistic and memory research; several possible loci for changes in behavioral performance have been proposed. These proposals are discussed in light of the event-related brain potential (ERP) data reported here. ERPs were recorded as subjects read nonfiction articles drawn from a popular magazine. The effects of word repetition were examined in this relatively natural context wherein words were repeated as a consequence of normal discourse structure. Three distinct components of the ERP were found to be sensitive to repetition: a positive component peaking at 200 msec poststimulus, a negative one at 400 msec (N400), and a later positivity. The components were differentially sensitive to the temporal lag between repetitions, the number of repetitions, and the normative frequency of the eliciting word. The N400 responded similarly to repetition in text as it has in experimental lists of words, but the late positivity showed a different pattern of results than in list studies.
Article
The effects on event-related potentials (ERPs) of within- and across-modality repetition of words and nonwords were investigated. In Experiment 1, subjects detected occasional animal names embedded in a series of words. All items were equally likely to be presented auditorily or visually. Some words were repetitions, either within- or across-modality, of words presented six items previously. Visual-visual repetition evoked a sustained positive shift, which onset around 250 msec and comprised two topographically and temporally distinct components. Auditory-visual repetition modulated only the later of these two components. For auditory ERPs, within- and across-modality repetition evoked effects with similar onset latencies. The within-modality effect was initially the larger, but only at posterior sites. In Experiment 2, critical items were auditory and visual nonwords, and target items were auditory words and visual pseudohomophones. Visual-visual nonword repetition effects onset around 450 msec, and demonstrated a more anterior scalp distribution than those evoked by auditory-visual repetition. Visual-auditory repetition evoked only a small, late-onsetting effect, whereas auditory-auditory repetition evoked an effect that, at parietal sites only, was almost equivalent to that from the analogous condition of Experiment 1. These findings indicate that, as indexed by ERPs, repetition effects both within- and across-modality are influenced by lexical status. Possible parallels with the effects of word and nonword repetition on behavioral variables are discussed.
Article
Two experiments investigated the modulation of event-related potentials (ERPs) by semantic priming and item repetition. In Experiment 1, subjects silently counted occasional non-words against a background of words, a proportion of which were either semantic associates or repetitions of a preceding word. Compared to control items, ERPs to repeated words were distinguished by an early (ca. 200 msec) transient negative-going deflection and a later, topographically widespread and temporally sustained positive-going shift. In contrast, semantically primed words showed a relatively small, topographically and temporally limited positive-going modulation peaking around 500 msec. These data were interpreted as evidence against models of priming and repetition which postulate similar loci for these effects. In Experiment 2, subjects counted occasional words against a background of non-words, some of which were repeated. ERPs to repetitions showed a similar early ERP modulation to that in Experiment 1, and also displayed a later slow positive shift. This latter effect was smaller in magnitude and had a delayed onset in comparison to Experiment 1. It was concluded that the effects of repetition differ as a consequence of whether, prior to their first presentation, items possess a representation in lexical memory.
Article
This study used event-related brain potentials and performance to trace changes in the underlying brain circuitry of undergraduates who spent 5 weeks learning a miniature artificial language. A reaction time task involving visual matching showed that words in the new language were processed like nonsense material before training, and like English words at the end of the 5 weeks of training. Scalp electrical recordings were used to explore the underlying basis for the change due to learning. Results of the ERPs were consistent with brain imaging studies showing posterior areas related to visual orthography and more widespread left lateral frontal and temporal areas related to semantic access. A posterior component at about 200 ms proved sensitive to differences in the orthography but did not change over the course of 5 weeks of training. A later ERP component at about 300 ms was sensitive to semantic task demands and underwent changes over the 5 weeks that were congruent with training-related changes observed in subjects’ matching task performance.
Article
This study compared and contrasted semantic priming in the visual and auditory modalities using event-related brain potentials (ERPs) and behavioural measures (errors and reaction time). Subjects participated in two runs (one visual, one auditory) of a lexical decision task where stimuli were word pairs consisting of “prime” words followed by equal numbers of words semantically related to the primes, words unrelated to the primes, pseudo-words, and nonwords. Subjects made slower responses, made more errors, and their ERPs had larger negative components (N400) to unrelated words than to related words in both modalities. However, the ERP priming effect began earlier, was larger in size, and lasted longer in the auditory modality than in the visual modality. In addition, the lateral distribution of N400 over the scalp differed in the two modalities. It is suggested that there may be overlap in the priming processes that occur in each modality but that these processes are not identical. The results also demonstrated that the N400 component may be specifically responsive to language or potential language events.
Article
This study examined the extent to which adult dyslexic readers exhibit concurrent deficits for phonological, orthographic and cross-modal word representations, and the relationship between these deficits and decoding ability. Participants were 18 phonological dyslexics and 19 normal readers at college level. Compared to normal readers, dyslexics exhibited significantly slower reaction times across tasks, and were less accurate on the unimodal orthographic task. Word pattern processing was more extensively related to decoding ability among dyslexic as compared to normal readers, but more robustly related to baseline measures of phonological and orthographic processing among normal readers. The results are discussed in the context of integrating the phonological and orthographic aspects of words, speed of processing deficits, and the importance of task selection when assessing adult dyslexic populations.
Article
Stimulus repetition improves performance and modulates event-related brain potentials in word recognition tasks. We recorded evoked magnetic responses from bilateral temporal sites of the brain to determine the cortical area related to the word repetition effect. Fourteen Japanese volunteers read words or pronounceable nonwords, some of which occurred twice with a lag of eight items. Clear magnetic responses were observed bilaterally. In the left hemisphere, a reduction of the magnetic responses by repetition was observed for words but not for nonwords in the latency range of 300–500 ms poststimulus. The sources of the responses were estimated to be in the left perisylvian area adjacent to the auditory cortex and the left parietal area. Only the perisylvian source activity showed the reduction by the word repetition. The left perisylvian area was thus suggested to be related to the word repetition effect. The activity in this area might be associated with the lexical memory process.
Article
This paper provides an introduction to mixed-effects models for the analysis of repeated measurement data with subjects and items as crossed random effects. A worked-out example of how to use recent software for mixed-effects modeling is provided. Simulation studies illustrate the advantages offered by mixed-effects analyses compared to traditional analyses based on quasi-F tests, by-subjects analyses, combined by-subjects and by-items analyses, and random regression. Applications and possibilities across a range of domains of inquiry are discussed.
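As a minimal sketch of the kind of crossed-random-effects model discussed here (generic notation, not the article's worked software example), a response y for subject i and item j can be written as:

    y_{ij} = \beta_0 + \beta_1 x_{ij} + s_i + w_j + \varepsilon_{ij}, \qquad s_i \sim \mathcal{N}(0, \sigma_s^2), \quad w_j \sim \mathcal{N}(0, \sigma_w^2), \quad \varepsilon_{ij} \sim \mathcal{N}(0, \sigma^2)

Here the subject intercepts s_i and item intercepts w_j are crossed rather than nested, so both sources of variability are estimated simultaneously instead of being handled in separate by-subjects and by-items analyses.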
Article
The electrophysiological response to words during the 'N400' time window (approximately 300-500 ms post-onset) is affected by the context in which the word is presented, but whether this effect reflects the impact of context on access of the stored lexical information itself or, alternatively, post-access integration processes is still an open question with substantive theoretical consequences. One challenge for integration accounts is that contexts that seem to require different levels of integration for incoming words (i.e., sentence frames vs. prime words) have similar effects on the N400 component measured in ERP. In this study we compare the effects of these different context types directly, in a within-subject design using MEG, which provides a better opportunity for identifying topographical differences between electrophysiological components, due to the minimal spatial distortion of the MEG signal. We find a qualitatively similar contextual effect for both sentence frame and prime-word contexts, although the effect is smaller in magnitude for shorter word prime contexts. Additionally, we observe no difference in response amplitude between sentence endings that are explicitly incongruent and target words that are simply part of an unrelated pair. These results suggest that the N400 effect does not reflect semantic integration difficulty. Rather, the data are consistent with an account in which N400 reduction reflects facilitated access of lexical information.
Article
To determine the time and location of lexico-semantic access, we measured neural activations by magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) and estimated the neural sources by fMRI-assisted MEG multidipole analysis. Since the activations for phonological processing and lexico-semantic access were reported to overlap in many brain areas, we compared the activations in lexical and phonological decision tasks. The former task required visual form processing, phonological processing, and lexico-semantic access, while the latter task required only visual form and phonological processing, with similar phonological task demands for both tasks. The activation areas observed among 9 or 10 subjects out of 10 were the superior temporal and inferior parietal areas, anterior temporal area, and inferior frontal area of both hemispheres, and the left ventral occipitotemporal area. The activations showed a significant difference between the 2 tasks in the left anterior temporal area in all 50-ms time windows between 200-400 ms from the onset of visual stimulus presentation. Previous studies on semantic dementia and neuroimaging studies on normal subjects have shown that this area plays a key role in accessing semantic knowledge. The difference between the tasks appeared in common to all areas in the time windows of 100-150 ms and 400-450 ms, suggesting early differences in visual form processing and late differences in the decision process, respectively. The present results demonstrate that the activations for lexico-semantic access in the left anterior temporal area start in the time window of 200-250 ms, after early visual form processing.
Article
The theoretical basis for magnetic field recording (MEG) methods is briefly summarized. Lines of constant radial magnetic field on a spherical surface, which are typically used in MEG applications to locate sources, are shown for various multiple dipole sources. It is shown that the usual localization methods are subject to relatively large error if only one additional dipole is present. New methods to improve spatial resolution are discussed.
Article
A method is described for computing a Z estimator for the quantitative comparison of topographical patterns in two maps. Z can assume values between 1 and -1. The procedure is illustrated on a 3-shell head model for different configurations of current dipoles in the head space. The sensitivity of Z estimation can be adjusted by different weighting procedures or by using an average reference. When Z = 1, indicating identity of the set of active dipoles in the two maps compared, a dilation factor can be computed to estimate the enhanced or reduced activity of these generators.
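The paper's exact weighting schemes are not reproduced here; purely as a rough illustration, assuming that Z behaves like a spatial correlation between average-referenced, strength-normalized maps, a sketch in Python might look like this:

    import numpy as np

    def map_similarity(map_a, map_b):
        # One value per electrode in each map. Both maps are average-referenced
        # and scaled to unit norm, so the result is bounded between -1 and 1:
        # 1 for identical topographies, -1 for inverted ones.
        a = np.asarray(map_a, dtype=float)
        b = np.asarray(map_b, dtype=float)
        a = a - a.mean()
        b = b - b.mean()
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return float(np.dot(a, b))

    # Toy usage: the same topography at twice the strength still yields 1.0.
    print(map_similarity([1.0, -1.0, 0.5, -0.5], [2.0, -2.0, 1.0, -1.0]))

Under the same assumption, the dilation factor mentioned in the abstract would correspond to the ratio of the two maps' overall norms.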
Article
The modulation of event-related potentials by word repetition was investigated in two experiments. In both experiments, subjects responded to occasional nonwords interspersed among a series of words. A proportion of the words were repetitions of previously presented items. Words were repeated after 0 or 6 intervening items in Experiment 1 and after 6 or 19 items in Experiment 2. Event-related potentials to repeated words were characterised by a sustained, widespread positive-going shift with an onset of approximately 300 ms. This effect did not vary significantly as a function of lag in either experiment. When words were repeated immediately, this repetition-evoked positive shift was preceded by a transient negative deflection (onset ca. 200 ms) which was absent in event-related potentials to words repeated at longer lags. These results suggest that the modulation of event-related potentials by word repetition is influenced by at least two processes. One of these processes acts relatively early during the processing of a repeated word, but subsides rapidly as inter-item lag between first and second presentations increases. The second process occurs later in time, but is considerably more robust over variations in inter-item lag.
Article
Event-related brain potentials (ERPs) and behavioral measures (reaction time and percentage errors) were measured in a semantic priming lexical decision task. In one block of trials, instructions and the proportion of related word pairs were designed to influence subjects to process the first member of each pair (prime) automatically. In another block, subjects were induced to attend to the meaning of each prime. ERPs to the primes were more positive between 200 and 600 msec and more negative between 750 and 1150 msec when subjects attended to the primes as opposed to when only automatic processing was required. Target word ERP activity between 200 and 525 msec (N400) was more negative in the neutral than in the semantically related condition in both blocks of trials, but more so in the attentional block, while a late ERP positivity between 525 and 1100 msec (Slow Wave) was more positive in the unrelated than the neutral condition, but only in the attentional block. The results are discussed in terms of the two-process model proposed by Posner and Snyder (1975a, 1975b).
Article
Two experiments investigated the modulation of event-related potentials (ERPs) by the repetition of orthographically legal and illegal nonwords. In Experiment 1, subjects silently counted occasional words against a background of nonwords, a proportion of which were repetitions of an immediately preceding legal or illegal item. ERPs to repeated legal items showed a sustained, topographically diffuse, positive-going shift. In contrast, repeated illegal nonwords gave rise to ERPs showing a smaller and temporally more restricted positive-going modulation. In an attempt to equalize depth of processing across legal and illegal nonwords, subjects in Experiment 2 were required to count items containing a nonalphabetic character against the same background of nonword items. ERPs to repeated legal items showed a modulation similar to, although smaller than, that found in Experiment 1, but no effects of repetition were observed in the ERPs to the illegal nonwords. It was concluded that the effects of repeating nonwords, at least as manifested in concurrently recorded ERPs, differ as a consequence of whether items can access lexical memory, and that this is inconsistent with the attribution of such effects solely to the operation of episodic memory processes.
Article
Three experiments investigated the impact of five lexical variables (instance dominance, category dominance, word frequency, word length in letters, and word length in syllables) on performance in three different tasks involving word recognition: category verification, lexical decision, and pronunciation. Although the same set of words was used in each task, the relationship of the lexical variables to reaction time varied significantly with the task within which the words were embedded. In particular, the effect of word frequency was minimal in the category verification task, whereas it was significantly larger in the pronunciation task and significantly larger yet in the lexical decision task. It is argued that decision processes having little to do with lexical access accentuate the word-frequency effect in the lexical decision task and that results from this task have questionable value in testing the assumption that word frequency orders the lexicon, thereby affecting time to access the mental lexicon. A simple two-stage model is outlined to account for the role of word frequency and other variables in lexical decision. The model is applied to the results of the reported experiments and some of the most important findings in other studies of lexical decision and pronunciation.
Article
When subjects read a semantically unexpected word, the brain electrical activity shows a negative deflection at about 400 msec in comparison with the response to an expected word. In order to study the brain systems related to this effect we mapped it with a dense (64-channel) electrode array and two reference-independent measures, one estimating the average potential gradients and the other radial current density. With these measures, the event-related brain potential (ERP) begins at about 70 msec with the P1, reflecting bilateral current sources over occipitoparietal areas. A strongly left-lateralized N1 then follows, peaking at about 180 msec, accompanied by an anterior positivity, the P2. A separate posterior positive pattern then emerges that seems to repeat the topography of the P1. Next, at about 350 msec, the ERP for the congruous word develops a P300 or LPC, characterized by a diffuse positivity over the superior surface of the head and several negativities over inferior regions. This superior source/inferior sink pattern of the LPC is greater over the left hemisphere. In contrast, the ERP for the incongruous word in this interval displays the N400 as a period in which topographic features are absent. At about 400 msec the ERP for the incongruous word begins to develop an LPC, which then remains relatively symmetric over the two hemispheres.
Article
Statistical methods for testing differences between neural images (e.g., PET, MRI or EEG maps) are problematic because they require (1) an untenable assumption of data sphericity and (2) a high subject to electrode ratio. We propose and demonstrate an exact and distribution-free method of significance testing which avoids the sphericity assumption and may be computed for any combination of electrode and subject numbers. While this procedure is rigorously rooted in permutation test theory, it is intuitively comprehensible. The sensitivity of the permutation test to graded changes in dipole location for systematically varying levels of signal/noise ratio, intersubject variability and number of subjects was demonstrated through a simulation of 70 different conditions, generating 5,000 different data sets for each condition. Data sets were simulated from a homogeneous single-shell dipole model. For noise levels commonly encountered in evoked potential studies and for situations where the number of subjects was less than the number of electrodes, the permutation test was very sensitive to a change in dipole location of less than 0.75 cm. This method is especially sensitive to localized changes that would be "washed-out" by more traditional methods of analysis. It is superior to all previous methods of statistical analysis for comparing topographical maps, because the test is exact, there is no assumption of a multivariate normal distribution or of the correlation structure of the data requiring correction, the test can be tailored to the specific experimental hypotheses rather than allowing the statistical tests to limit the experimental design, and there is no limitation on the number of electrodes that can be simultaneously analyzed.
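As a rough illustration of this style of analysis (not the authors' implementation), the following Python sketch shows a distribution-free permutation test on paired topographic maps, using sign-flipping of per-subject difference maps and a generic test statistic:

    import numpy as np

    def paired_map_permutation_test(cond_a, cond_b, n_perm=5000, seed=0):
        # cond_a, cond_b: (n_subjects, n_electrodes) arrays of per-subject maps.
        # Statistic: norm of the grand-average difference map. Under the null,
        # each subject's difference map may have its sign flipped (Monte Carlo
        # approximation of the exact test over all 2**n_subjects flips).
        rng = np.random.default_rng(seed)
        diff = np.asarray(cond_a, dtype=float) - np.asarray(cond_b, dtype=float)
        observed = np.linalg.norm(diff.mean(axis=0))
        n_subjects = diff.shape[0]
        exceed = 0
        for _ in range(n_perm):
            signs = rng.choice([-1.0, 1.0], size=(n_subjects, 1))
            if np.linalg.norm((signs * diff).mean(axis=0)) >= observed:
                exceed += 1
        return (exceed + 1) / (n_perm + 1)   # permutation p-value

    # Toy usage: 10 subjects, 32 electrodes, localized difference on one channel.
    rng = np.random.default_rng(1)
    a = rng.normal(size=(10, 32))
    b = a + rng.normal(scale=0.5, size=(10, 32))
    b[:, 0] += 1.0
    print(paired_map_permutation_test(a, b))

Because the null distribution is built from the data themselves, no sphericity or normality assumption is needed, and the statistic can be tailored to the hypothesis of interest.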
Article
A method is described to compare two evoked potential scalp fields in order to decide if the two fields are the same or different. The method uses Efron's bootstrap technique which avoids potential errors due to assumptions about the underlying stochastic process. It is configured to focus only on the shape of the evoked potential scalp field. The method is applied to a simple visually evoked potential paradigm and results are compared to the chi-square test using data from 7 normal subjects.
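In the same spirit, and again only as an illustrative sketch under assumed choices of statistic and normalization (not the authors' procedure), a bootstrap over subjects can put a confidence interval on a shape-only dissimilarity between two mean scalp fields:

    import numpy as np

    def bootstrap_field_shape_difference(cond_a, cond_b, n_boot=2000, seed=0):
        # cond_a, cond_b: (n_subjects, n_electrodes) arrays. The grand-average
        # map of each condition is scaled to unit norm so that only the spatial
        # shape of the field, not its overall strength, enters the statistic.
        rng = np.random.default_rng(seed)
        cond_a = np.asarray(cond_a, dtype=float)
        cond_b = np.asarray(cond_b, dtype=float)
        n_subjects = cond_a.shape[0]

        def shape_difference(a, b):
            ma, mb = a.mean(axis=0), b.mean(axis=0)
            ma = ma / np.linalg.norm(ma)
            mb = mb / np.linalg.norm(mb)
            return np.linalg.norm(ma - mb)

        boot = np.empty(n_boot)
        for k in range(n_boot):
            idx = rng.integers(0, n_subjects, size=n_subjects)  # resample subjects
            boot[k] = shape_difference(cond_a[idx], cond_b[idx])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        return shape_difference(cond_a, cond_b), (lo, hi)

Deciding whether the two fields are "the same or different" would additionally require a null reference, for example from split-half comparisons within a single condition, which this sketch does not implement.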