Article

Processes in word recognition


Abstract

Five hypotheses were proposed and tested to account for Reicher's (1968) finding that recognition of letters is more accurate in the context of a meaningful word than alone, even with redundancy controlled by a forced-choice design. All five hypotheses were rejected on the basis of the experimental results. Performance on the forced-choice letter detection task averaged 10% better when the stimuli were four-letter English words than when the stimuli were single letters appearing alone in the visual field. Three classes of models were proposed to account for the experimental results. All three are based on analysis of the task in terms of the extraction of features from the stimuli.


... Bar, 2004;Biederman, 1981;Davenport & Potter, 2004). Superiority effects have been found experimentally with words (Cattell, 1886;Reicher, 1969;Wheeler, 1970), faces (Homa, Haver, & Schwartz, 1976), objects (Weisstein & Harris, 1974) and scenes (Biederman, 1972). ...
... However, these results could have demonstrated improved ability in remembering letters from the English words rather than ability in identifying them. Reicher (1969) and Wheeler (1970) used the two-alternative forced choice (2AFC) paradigm to address this issue methodologically. ...
... In conclusion, there is strong supporting evidence for each of the above superiority effects. Effects have been found using empirical methods based upon concepts devised by Reicher (1969) and Wheeler (1970) to eliminate response bias. It is this ability to isolate the influence of context alone, and its effect on guessing, that makes clear the interactions of context with the perception of individual components. ...
Thesis
The thesis explores how non-target objects influence object recognition. In all five experiments, sets of non-target objects are used to generate 'scene' contexts and these are presented so that they surround individual target objects. The foci of investigation are (1) whether scene context effects with multiple objects exist, (2) if they exist, whether they are perceptual or due to response biases, (3) what role the distribution of attention plays in the generation of scene context effects, and (4) what the time-course of their generation is. Experiments 1-3 found that target objects were named more accurately when non-target objects were semantically related (context-consistent) than semantically unrelated (context-inconsistent). However, the magnitude of the context effect was mediated by visual attention. A significant effect was only achieved when all objects (targets and non-targets) were within an attended region, and not when non-targets fell outside of this region. Experiment 4 used a paradigm conceptually related to the Reicher-Wheeler paradigm to provide a measure of response bias. A six-alternative forced-choice response design demonstrated a significant influence of scene context even after the data were corrected for response bias, suggesting a perceptual/representational locus to the scene context effect generated by non-target objects on target objects. Experiments 4 and 5 also manipulated the time-course of the onset of non-target objects relative to target objects. The results showed that at least 52 msec was required for the presence of non-target objects to influence recognition of target objects. In other words, the scene context effect for multiple non-target objects requires at least 52 msec to accumulate. In summary, scene context effects for multiple non-target objects on target objects directly influence the representational processes of target recognition. Furthermore, their magnitude is dependent on the distribution of attention across the visual field and the temporal relationship of non-targets and targets. How these factors influence the modelling of object recognition is also considered.
... Consequently, the contexts in which letters appear can significantly alter readers' ability to discriminate between them. Readers identify letters more accurately when they appear in a real word compared to a pseudoword (Coch & Mitra, 2010;Grainger & Jacobs, 1994;Kezilas et al., 2016;Reicher, 1969;Wheeler, 1970). This word superiority effect is understood as evidence that word representations enrich letter identification processes (Grainger & Jacobs, 1996;McClelland & Rumelhart, 1981;Rumelhart & McClelland, 1982). ...
... We predicted that readers would be less accurate at discriminating between two letters with high visual feature overlap (m-n) relative to two letters with low visual overlap (m-t). We also predicted that letter identification would be more accurate in words relative to pseudowords, and pseudowords relative to unpronounceable consonant strings, in line with word (Reicher, 1969;Wheeler, 1970) and pseudoword superiority effects (Baron & Thurston, 1973;Carr et al., 1978). Finally, we predicted that letter confusability from visual similarity would be reduced when letter-strings aligned with orthographic and orthotactic knowledge, as we proposed that readers would use their knowledge of words and legal letter combinations to narrow down plausible letter candidates. ...
... Our results revealed effects of orthographic context and visual feature similarity on letter discrimination accuracy in a Reicher-Wheeler task. Performance improved as letter strings became more word-like (words > pseudowords > consonant strings), replicating the word superiority effect and the pseudoword superiority effect (Baron & Thurston, 1973;Carr et al., 1978;Reicher, 1969;Wheeler, 1970). Performance was also superior when the discrimination involved letters with low visual similarity compared to letters with high visual similarity. ...
Article
Full-text available
Word recognition is facilitated by primes containing visually similar letters (dentjst-dentist, Marcet & Perea, 2017), suggesting that letter identities are encoded with initial uncertainty. Orthographic knowledge also guides letter identification, as readers are more accurate at identifying letters in words compared to pseudowords (Reicher, 1969; Wheeler, 1970). We investigated how higher-level orthographic knowledge and low-level visual feature analysis operate in combination during letter identification. We conducted a Reicher-Wheeler task to compare readers’ ability to discriminate between visually similar and dissimilar letters across different orthographic contexts (words, pseudowords, and consonant strings). Orthographic context and visual similarity had independent effects on letter identification, and there was no interaction between these factors. The magnitude of these effects indicated that higher-level orthographic information plays a greater role than lower-level visual feature information in letter identification. We propose that readers use orthographic knowledge to refine potential letter candidates while visual feature information is accumulated. This combination of higher-level knowledge and low-level feature analysis may be essential in permitting the flexibility required to identify visual variations of the same letter (e.g. N-n) whilst maintaining enough precision to tell visually similar letters apart (e.g. n-h). These results provide new insights on the integration of visual and linguistic information and highlight the need for greater integration between models of reading and visual processing. This study was pre-registered on the Open Science Framework. Pre-registration, stimuli, instructions, trial-level data, and analysis scripts are openly available ( https://osf.io/p4q9u/ ).
... These solutions suggest relations to several theoretical studies related to visualization, general RC and word recognition (WR). In theory, WR skills are important elements of achieving RC [13][14], where the word superiority effect (WSE) [15][16][17][18] and the word frequency effect (WFE) [19][20] help to explain the phenomenon of reading without comprehension. The WSE shows that a letter is easier to recognize in known words than in non-words, while the WFE shows that more frequent words are responded to more rapidly. ...
... In between the top and bottom levels, the word-level process, involving word recognition (WR), is considered an important step toward achieving RC [13]. The WSE [15][16][17][18] and WFE [19][20] are the two most well-established theories related to achieving WR. ...
... Similarly, Tozcu & James [5] found that learning frequent words significantly affects reading and word recognition in the treatment group. These findings are consistent with those from other studies, such as Perfetti, Landi and Oakhill [13], Rayner, Schotter, Masson, Potter and Treiman [14], Reicher [16], Wheeler [17], McClelland and Johnson [18] and Forster and Chambers [20]. ...
Article
Comprehending text is the aim of reading; however, there is the phenomenon of non-Arabic speakers in Malaysia reading the Qur’an, written in Arabic, without comprehension. Word recognition (WR) theory, through the word frequency effect (WFE) and word superiority effect (WSE), is used as a basis to achieve reading comprehension (RC) of the Qur’an. The Eye of Qur’an (EoQu) interface was developed to visualise word occurrences and word morphology. This is achieved through parallel plot and word segmentation visualization. EoQu can track a user’s personal vocabulary with a presentation of percentage and word position in the Qur’an. Consequently, users know their ability to recognize Arabic words in relation to the whole Qur’an to achieve RC. An experimental study was set up with 90 Malaysian participants, starting with a pre-test, followed by a stratified sampling to divide participants into control and experimental groups (who used EoQu) for the post-test. Results showed evidence of improvement in WR based on scores and time taken to complete the Arabic Word Recognition Test.
... For faces, the effect has been typically shown in terms of better performance for parts shown in a whole face than when presented in isolation (Tanaka & Farah, 1993). A similar task for words can be found in the classic Reicher-Wheeler paradigm (Reicher, 1969;Wheeler, 1970). The word superiority effect refers to the better recognition of a target letter presented within a word than alone or a nonword (Reicher, 1969;Wheeler, 1970). ...
... A similar task for words can be found in the classic Reicher-Wheeler paradigm (Reicher, 1969;Wheeler, 1970). The word superiority effect refers to the better recognition of a target letter presented within a word than alone or a nonword (Reicher, 1969;Wheeler, 1970). This word superiority effect is regarded as the result of the interaction between whole-word lexical representations (top-down influences) and low-level bottom-up processing at the letter level (e.g., McClelland & Rumelhart, 1981;Rumelhart & McClelland, 1982). ...
Article
Full-text available
Holistic processing aids in the discrimination of visually similar objects, but it may also come with a cost. Indeed, holistic processing may improve the ability to detect changes to a face while impairing the ability to locate where the changes occur. We investigated the capacity to detect the occurrence of a change versus the capacity to localize a change for faces, houses, and words. Change detection was better than change localization for faces. Change localization outperformed change detection for houses. For words, there was no difference between detection and localization. We know from previous studies that words are processed holistically. However, being an object of visual expertise processed holistically, visual words are also a linguistic entity. Previously, the word composite effect was found for phonologically consistent words but not for phonologically inconsistent words. For words, an object of visual expertise for which linguistic information is important, letter-position information is also crucial. Thus, the importance of localizing letters and features may augment the capacity to localize a change in words, making the detection of a change and the localization of a change equivalent.
... The cognitive mechanisms involved in single-letter identification within words are paramount to attaining the necessary high level of automaticity in reading (Marzouki and Grainger, 2014). Related to this complex skill is the so-called "word superiority effect" phenomenon, first reported by Reicher (1969) and Wheeler (1970). These authors showed that a letter embedded in a word was identified more accurately than the same letter embedded in a pseudo-word or non-word. ...
... Interestingly, this study also shows that the accuracy with which a letter is identified is higher when the latter is embedded in an orthographically legal string of letters, that is, in words and pseudo-words, than when it appears in orthographically illegal strings of letters, that is, in non-words. Similarly, Wheeler (1970) further confirmed the robustness of the word superiority effect by using what has come to be known as the "Reicher-Wheeler" task, which differs from Reicher's (1969) initial experiment in that it controls for serial position, word-probe delay, and word frequency. The word superiority effect has often been interpreted as strong evidence of top-down modulation, originating from the mental lexicon, of the lower levels of visual word form recognition (Marchetti and Mewhort, 1986). ...
Article
Full-text available
In this study, we examined the word superiority effect in Arabic and English, two languages with significantly different morphological and writing systems. Thirty-two Arabic–English bilingual speakers performed a post-cued letter-in-string identification task in words, pseudo-words, and non-words. The results established the presence of the word superiority effect in Arabic and a robust effect of context in both languages. However, they revealed that, compared to the non-word context, word and pseudo-word contexts facilitated letter identification more in Arabic than in English. In addition, the difference between word and pseudo-word contexts was smaller in Arabic compared to English. Finally, there was a consistent first-letter advantage in English regardless of the context, while this was more consistent only in the word and pseudo-word contexts in Arabic. We discuss these results in light of previous findings and argue that the differences between the patterns reported for Arabic and English are due to the qualitative difference between word morphophonological representations in the two languages.
... Readers' lexical knowledge may also be used to constrain the likely identities of individual words, as demonstrated by the word-superiority effect, or the paradoxical finding that letters in words can be identified more accurately than letters displayed in isolation (McClelland & Rumelhart, 1981;Reicher, 1969;Rumelhart & McClelland, 1982;Wheeler, 1970). This phenomenon suggests that one's knowledge of words can, for example, allow the partially degraded letter string leopar-to be identified as the word leopard even though the word's last letter is not visible. ...
... One alternative to this assumption, for example, would be that the time required to identify a word is determined by the time needed to identify all of its constituent letters, with the word-processing rate thus equalling the slowest letter-processing rate. This alternative would, however, be at odds with the word-superiority effect (Reicher, 1969;Wheeler, 1970). As instantiated by Equations 4-6, the model provides a partial account of this effect (footnote 13). ...
Article
Full-text available
Word identification is slower and less accurate outside central vision, but the precise relationship between retinal eccentricity and lexical processing is not well specified by models of either word identification or reading. In a seminal eye-movement study, Rayner and Morrison (1981) found that participants made remarkably accurate naming and lexical-decision responses to words displayed more than three degrees from the center of vision—even under conditions requiring fixed gaze. However, the validity of these findings is challenged by a range of methodological limitations. We report a series of gaze-contingent lexical-decision and naming experiments that replicate and extend Rayner and Morrison’s study to provide a more accurate estimate of how visual constraints delimit lexical processing. Simulations were conducted using the E-Z Reader model (Reichle et al., 2012) to assess the implications for understanding eye-movement control during reading. Augmenting the model’s assumptions about the impact of both eccentricity and visual crowding on the rate of lexical processing provided good fits to the observed data without impairing the model’s ability to simulate benchmark eye-movement effects. The findings are discussed with a view towards the development of a complete model of reading.
... This choice was made in order to design a homogeneous set of stimuli, supposedly known by all participants. We used a paradigm from the reading literature that has revealed a word-superiority effect (WSE, Reicher, 1969;Wheeler, 1970). The WSE initially showed faster RTs and better accuracy in identifying a letter in a two-alternative forced-choice (2AFC) task when it belongs to a word than when the letter is presented in isolation, suggesting an advantage due to lexical activation. ...
... Evidence for such an encyclopedic number lexicon was until now lacking, as previous findings could be explained by explicit semantic elaboration and processing (Gullick & Temple, 2011), or by activation of semantic and verbal associates (Alameda et al., 2003). Here, testing historians with high knowledge of dates, we compared the processing of dates to unknown numbers with the Reicher-Wheeler paradigm borrowed from the reading literature (word superiority effect, WSE, Reicher, 1969;Wheeler, 1970). ...
Article
Full-text available
Neuropsychological case-studies suggested that dates and encyclopedic numbers may be processed differently than unknown numbers. However, this issue was seldom investigated in healthy participants. Therefore, it is unclear whether known dates are read like words (as lexical items), or like numbers (each position strictly defines digits’ values in a base-10 system). Here, we compared dates to unknown numbers in an experiment using a paradigm from the word recognition literature. We assessed the word-superiority effect by testing experts (students/teachers in History) with dates. A 4-character stimulus (xxxx; letters or numbers, half known/unknown) was presented centrally, masked, and followed by 2 characters above and below the mask, at position 2 (xXxx) or 3 (xxXx), in an alternative forced-choice recognition task. Both accuracy and reaction times were better for dates than unknown numbers, similarly to the results obtained with words by comparison to non-words. However, this effect was modulated by position in the string. These results show a “date-superiority effect” revealing that dates are processed differently than unknown numbers, and suggest that similar orthographical mechanisms might be used to process dates and words.
... A further aspect contributing to the higher cohesiveness of words might be feedback from lexical levels to early perceptual ones. There is evidence suggesting that whole-object level representations facilitate early individual part processing, such as the findings on the part-whole effect for words (i.e., better letter identification in strings when the string is a word vs. a pseudoword or a nonword; Reicher, 1969;Wheeler, 1970). The advantage of the word context has been interpreted as a top-down influence of whole-word representations at the orthographic level on the letter identification level (McClelland & Rumelhart, 1981). ...
... Second, feedback from lexical processing to early perceptual processing contributes to word cohesiveness (Reicher, 1969;Wheeler, 1970). Finally, words have phonology and semantics that might help the perceived cohesiveness of words through re-entrant feedback to the orthographic level both directly and indirectly (Seidenberg & McClelland, 1989). ...
Article
Full-text available
A dual-route account of holistic processing has been proposed, which includes a stimulus-based and experience-based approach to holistic processing. The bottom-up route was suggested by the observation of holistic processing for novel Gestalt line patterns in the absence of expertise. For words, there is mainly evidence for a late, lexical, experience-based locus of holistic processing, with scarce evidence for an early, stimulus-based locus. However, salient early Gestalt information (i.e., connectedness, closure, and continuity between parts) is important for letter and word identification. Thus, there might be an overlap at an early, perceptual processing stage between Gestalt stimulus-based holistic processing and word holistic processing. In the task we used, words and Gestalt line patterns were superimposed, and we evaluated whether one class of stimuli was processed less holistically when an aligned pattern of the other class (processed holistically) was superimposed. There was some evidence supporting an early locus for the influence of word processing on Gestalt line patterns, but the interaction between the two stimuli was not reciprocal, which needs further clarification. When an aligned word (processed holistically) was overlaid on a line pattern, the line pattern was processed less holistically. However, when an aligned line pattern (processed holistically) was overlaid on a word, the word was not processed less holistically. This pattern might result from the higher cohesiveness of words, their automaticity, and feedback from the lexicon.
... Because it seems so intuitively reasonable and perhaps because experimental research shifted from the mental to the behavioral, Cattell's explanation (and Huey's) stood unchallenged until the independent publications of experiments by Reicher (1969) and Wheeler (1970). Cattell's conclusion that words are perceived as wholes might be correct, but his experiments could not support this conclusion. ...
... Remembering enough letters would prompt retrieval of a word that contains them, making the report of the letter string a mix of perception, memory, and a bias to respond with words. Reicher (1969) and Wheeler (1970) controlled for response bias by asking participants which of two letters had been briefly presented (and masked) in a particular position. For example, given the string lake, probing whether "k" or "t" had appeared in the 3rd position would not favor a word response because either letter completes a word. ...
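The logic of this bias control can be sketched as a stimulus-construction check: a probe is only unbiased if both response alternatives complete a real word at the probed position. The tiny lexicon and helper below are hypothetical illustrations, not stimuli from the original studies.

```python
# Sketch of the Reicher-Wheeler bias control: both response alternatives
# must complete a real word, so a "respond with a word" strategy cannot
# favor either letter. LEXICON is a hypothetical stand-in for a word list.
LEXICON = {"lake", "late", "card", "cart", "word", "work"}

def unbiased_probe(word, pos, foil):
    """True if swapping in the foil at `pos` (0-indexed) also yields a word."""
    if word not in LEXICON or foil == word[pos]:
        return False
    alternative = word[:pos] + foil + word[pos + 1:]
    return alternative in LEXICON

# Probing "k" vs. "t" at the 3rd position of "lake" (index 2):
print(unbiased_probe("lake", 2, "t"))  # both "lake" and "late" are words
print(unbiased_probe("lake", 2, "x"))  # "laxe" is not a word, so this probe is biased
```

In an actual experiment, only probes passing this check would be used, so that guessing "whichever letter makes a word" yields chance performance.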
Chapter
In this chapter, the authors highlight advances in the study of skilled reading, from word identification to comprehension, emphasizing language and writing system influences, the convergence of brain and behavior data, with brief links to reading difficulties and learning to read. They begin by replacing their metaphor of stream currents with a static representation of what reading science seeks to explain, drawing on the Reading Systems Framework. The authors apply the framework to examine research progress, describing three significant advances. These include: the word‐identification system in skilled alphabetic reading; comprehending while reading; and toward a more universal science of reading. Moving beyond alphabetic writing toward a more universal perspective, orthographic depth was extended to nonalphabetic writing, for example, the consonant‐based Abjad system and morpho‐syllabic Chinese. Comparative research has stimulated the extension of models of alphabetic reading to nonalphabetic reading.
... [Figure captions: (b) sequence of events in an upright trial and an inverted trial; Fig. 3, the part-whole task with Portuguese words.] Performance is better for parts shown in a whole face than when presented in isolation (Tanaka & Farah, 1993), although a similar advantage has also been shown for parts shown in an intact face than in a new face configuration (Tanaka & Sengco, 1997). A similar task for words can be found in the classic Reicher-Wheeler paradigm (Reicher, 1969;Wheeler, 1970). Observers viewed a briefly presented letter string that was subsequently masked and had to identify the letter at a specific location in the string. ...
... Yet apart from isolated parts, new face configurations formed by modifying spacing between parts have also been used in the control condition, showing the same effect (reviewed in Tanaka & Simonyi, 2016). Therefore, the use of isolated parts in the control condition is not a must for showing the effect, and we opted to follow the procedure of the classical paradigm showing the part-whole task (Reicher, 1969;Wheeler, 1970). ...
Article
Full-text available
Recently, paradigms in the face recognition literature have been adopted to reveal holistic processing in word recognition. It is unknown, however, whether different measures of holistic word processing share similar underlying mechanisms, and whether fluent word reading relies on holistic word processing. We measured holistic processing effects in three paradigms (composite, configural sensitivity, part-whole) as well as in reading fluency (3DM task: reading aloud high- and low-frequency words and pseudowords). Bin scores were used to combine accuracy and response time variables in the quest for a more comprehensive, reliable, and valid measure of holistic processing. Weak correlations were found between the different holistic processing measures, with only a significant correlation between the configural sensitivity effect and the part-whole effect (r = .32) and a trend of a positive correlation between the word composite effect and the configural sensitivity effect (r = .21). Of the three holistic processing measures, only one (the part-whole effect) correlated with a lexical access measure of 3DM (r = .23). We also performed a principal component analysis (PCA) of performance in the three lists of 3DM, with the second component most probably reflecting lexical access processes. There was a tendency for a positive correlation between the part-whole bin measure and Component 2 of the PCA. We also found a positive correlation between composite aligned in accuracy and Component 2 of the PCA. Our results show that different measures of holistic word processing reflect predominantly different mechanisms, and that differences among normal readers in word reading do not seem to depend highly on holistic processing.
... It is this principle that we adopted in our research to assess foveal and parafoveal processing in beginning readers and dyslexic children. Two paradigms for studying word processing were used: the variable viewing position paradigm and the Reicher (1969) and Wheeler (1970) paradigm. ...
... It is now nearly 35 years since Reicher (1969) and Wheeler (1970) each published work detailing the top-down influences of lexical knowledge on the perception of letters in words. The Reicher-Wheeler task consists of presenting a word (e.g., POIRE) very briefly (on the order of 50 ms), immediately followed by a mask (cf. ...
... We used the Rapid Parallel Visual Presentation (RPVP) [23] paradigm with a forced-choice perceptual identification task (2AFC) [31,32]. We used four types of stimuli: correct sentences (CS), incorrect sentences (Ins), non-word lists (Nw), and correct sentences with scrambled targets (Css). ...
Preprint
Full-text available
Reading is a complex cognitive task involving processes from different systems. The present work aims to identify some points of divergence reported in the reading literature and discuss them within a new experimental paradigm framework. Inspired by the paradigms of perceptual identification and rapid parallel visual presentation (RPVP), we emphasize that the originality of our experimental paradigm lies in the recruitment of multi-stable Arabic percepts within the region where low-level processing occurs (i.e., the visual span area). The paradigm is flexible enough to reach higher-order processing levels. In agreement with previous work highlighting the parafoveal-on-foveal effect, the results suggest parallel word processing. Furthermore, they suggest a rapid extraction of syntactic and semantic information from words in sentences, while attributing an advantage to semantic processing in the emergence of the sentence superiority effect.
... For example, Reicher (1969) established a measure of the WSE by comparing the minimum presentation duration needed to achieve 90% accuracy when discriminating a target vs. foil letter either alone, or within a word vs. non-word string. This advantage in identifying a letter within a word string is robust to replication (e.g., Wheeler, 1970) and has been shown using response-time versions of the task (e.g., Allen & Madden, 1990;Campbell, 1985); however, this WSE may dissipate under varying extraneous conditions, particularly if a mask is not used after the stimulus (e.g., Massaro, 1973). Importantly, this WSE measurement method also lacks a performance baseline, with reported effects measured by the absolute change in display time, accuracy, or response time (RT). ...
Article
Full-text available
The Word Superiority Effect (WSE) refers to the phenomenon where a single letter is recognized more accurately when presented within a word, compared to when it is presented alone or in a random string. However, previous research has produced conflicting findings regarding whether this effect also occurs in the processing of Chinese characters. The current study employed the capacity coefficient, a measure derived from the Systems Factorial Technology framework, to investigate processing efficiency and test for the superiority effect in Chinese characters and English words. We hypothesized that WSE would result in more efficient processing of characters/words compared to their individual components, as reflected by super capacity processing. However, contrary to our predictions, results from both the "same" (Experiment 1) and "different" (Experiment 2) judgment tasks revealed that native Chinese speakers exhibited limited processing capacity (inefficiency) for both English words and Chinese characters. In addition, results supported an English WSE with participants integrating English words and pseudowords more efficiently than nonwords, and decomposing nonwords more efficiently than words and pseudowords. In contrast, no superiority effect was observed for Chinese characters. To conclude, the current work suggests that the superiority effect only applies to English processing efficiency with specific context rules and does not extend to Chinese characters.
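The capacity coefficient invoked in this abstract has a standard form in Systems Factorial Technology for an OR task: C(t) = H_whole(t) / (H_partA(t) + H_partB(t)), where H(t) = -log S(t) is the cumulative hazard of the response-time distribution. A minimal estimator, with illustrative made-up response times rather than the study's data, might look like:

```python
import numpy as np

def cumulative_hazard(rts, t):
    """H(t) = -log S(t), with the survivor function S(t) estimated as the
    proportion of response times slower than t."""
    s = np.mean(np.asarray(rts) > t)
    return -np.log(s) if s > 0 else np.inf

def capacity_or(rt_whole, rt_part_a, rt_part_b, t):
    """Systems Factorial Technology capacity coefficient for an OR task:
    C(t) > 1 indicates super capacity (consistent with a superiority effect),
    C(t) = 1 unlimited capacity, and C(t) < 1 limited capacity."""
    return cumulative_hazard(rt_whole, t) / (
        cumulative_hazard(rt_part_a, t) + cumulative_hazard(rt_part_b, t)
    )

# Toy illustration: whole-stimulus RTs no faster than single-part RTs
# yield limited capacity (C < 1), the pattern reported here for Chinese characters.
whole = [400] * 50 + [800] * 50
part = [400] * 50 + [800] * 50
print(capacity_or(whole, part, part, t=600))  # 0.5: limited capacity
```

In practice C(t) is evaluated across a range of t values and compared against the unlimited-capacity baseline of 1.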
... In word-recognition models such as the Interactive Activation Model (McClelland and Rumelhart, 1981) and TRACE (McClelland and Elman, 1986), feedback connections from the word level to the letter or phoneme level enable faster activation of sublexical units in words than in nonwords, by either enhancing the representations of expected units or suppressing competing ones. Even though these models can account for phenomena such as the word superiority effect, the faster detection of a letter in a masked visual display when it is presented in a word than in an unpronounceable nonword or in isolation (Reicher, 1969; Wheeler, 1970; Mewhort, 1967), they do not incorporate effects of sentence- and discourse-level constraints. It has also been demonstrated that mispronounced or ambiguous phonemes are more likely to be missed by participants during a detection task when the context is more constraining, indicating that top-down processing constraints interact with bottom-up sensory information to reduce the number of possible word candidates as the acoustic input unfolds (Marslen-Wilson and Welsh, 1978; Martin and Doumas, 2017). ...
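The word-to-sublexical feedback mechanism these models implement can be illustrated with a toy simulation. This is not the actual Interactive Activation Model (which has inhibitory connections, resting levels, and a full lexicon); it is a deliberately minimal sketch with invented parameters, showing only how top-down reinforcement from a word unit raises the activation of a constituent letter relative to the same letter in a context with no matching word unit.

```python
import numpy as np

LETTERS = list("WORK")  # a toy stimulus whose word unit exists in the "lexicon"

def run(context_is_word, steps=20, bottom_up=0.2, feedback=0.15, decay=0.1):
    """Iterate a tiny two-layer activation network and return the final
    activation of the target letter (position 0)."""
    letter_act = np.zeros(len(LETTERS))
    word_act = 0.0
    for _ in range(steps):
        # bottom-up: feature evidence drives the letter units toward 1
        letter_act += bottom_up * (1 - letter_act)
        if context_is_word:
            # letters drive the matching word unit...
            word_act += 0.25 * letter_act.mean() * (1 - word_act)
            # ...and the word unit feeds activation back to its letters
            letter_act += feedback * word_act * (1 - letter_act)
        letter_act -= decay * letter_act
        word_act -= decay * word_act
    return letter_act[0]

in_word = run(True)      # letter supported by word-level feedback
in_nonword = run(False)  # same letter, no matching word unit
```

The inequality `in_word > in_nonword` is the toy analogue of the word superiority effect: the same bottom-up input yields higher letter activation when a word-level unit feeds back.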
Article
Full-text available
When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, or by internally generated linguistic units, or by the interplay of both, remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence and discourse context are less constraining. When the language was not comprehended, acoustic features, but not phonemic ones, were more strongly modulated; in contrast, when the native language was comprehended, phonemic features were more strongly modulated.
Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
... This is a task that requires the same responses to words and nonwords, while being heavily influenced by top-down lexical effects. For instance, many experiments have shown that it is easier to recognize letters when embedded in words than in nonwords (i.e., a word superiority effect; see also Reicher, 1969; Wheeler, 1970; McClelland, 1976; Prinzmetal, 1992; Grainger et al., 2003; Casaponsa and Duñabeitia, 2016; see Cattell, 1886, for the first demonstration). In the task, we presented each item briefly either intact (without diacritics) or with extra non-existent diacritical marks in the target language (Spanish) [e.g., words: amigo (friend) vs. ãmîgô; nonwords: agimo vs. ãgîmô]. ...
Article
Full-text available
Introduction: Recent research has reported that adding non-existent diacritical marks to a word produces a minimal reading cost compared to the intact word. Here we examined whether this minimal reading cost is due to: (1) the resilience of letter detectors to the perceptual noise (i.e., the cost should be small and comparable for words and nonwords) or (2) top-down lexical processes that normalize the percept for words (i.e., the cost would be larger for nonwords). Methods: We designed a letter detection experiment in which a target stimulus (either a word or a nonword) was presented intact or with extra non-existent diacritics [e.g., amigo (friend) vs. ãmîgô; agimo vs. ãgîmô]. Participants had to decide which of two letters was in the stimulus (e.g., A vs. U). Results: Although the task involved lexical processing, with responses being faster and more accurate for words compared to nonwords, we found only a minimal advantage in error rates for intact stimuli versus those with non-existent diacritics. This advantage was similar for both words and nonwords. Discussion: The letter detectors in the word recognition system appear to be resilient to non-existent diacritics without the need for feedback from higher levels of processing.
... In this paradigm, the Rapid Parallel Visual Presentation (RPVP) procedure, a word would be better recognized when it is embedded in a syntactically correct (grammatical) sequence. Such a finding would echo the word superiority effect, whereby a letter is better identified when embedded in a word rather than in a random string of letters (Cattell, 1886; Reicher, 1969; Wheeler, 1970). Snell and Grainger asked their participants to identify a single target word that was embedded either in a 4-word sentence or in an ungrammatical scrambled sequence of the same words. ...
Preprint
Full-text available
When a sequence of written words is briefly presented and participants are asked to identify just one word at a post-cued location, then word identification accuracy is higher when the word is presented in a grammatically correct sequence compared with an ungrammatical sequence. This sentence superiority effect has been reported in several behavioral studies and two EEG investigations. Taken together, the results of these studies support the hypothesis that the sentence superiority effect is primarily driven by rapid access to a sentence-level representation via partial word identification processes that operate in parallel over several words. Here we used MEG to examine the neural structures involved in this early stage of written sentence processing, and to further specify the timing of the different processes involved. Source activities over time showed grammatical vs. ungrammatical differences first in the left inferior frontal gyrus (IFG: 325-400 ms), then the left anterior temporal lobe (ATL: 475-525 ms), and finally in both left IFG and left posterior superior temporal gyrus (pSTG: 550-600 ms). We interpret the early IFG activity as reflecting the rapid bottom-up activation of sentence-level representations, including syntax, enabled by partly parallel word processing. Subsequent activity in ATL and pSTG is thought to reflect the constraints imposed by such sentence-level representations on on-going word-based semantic activation (ATL), and the subsequent development of a more detailed sentence-level representation (pSTG). These results provide further support for a cascaded interactive-activation account of sentence reading.
... Experimental evidence for this comes from paradigms that show that processing of one facial region is influenced by other regions, as in the part-whole effect, in which individual face parts are better recognized when seen in the context of the whole face (Tanaka and Farah 1993; Tanaka and Simonyi 2016), and the composite-face effect, in which the recognition of one half of the face is influenced by the other half (Murphy et al. 2017; Young et al. 1987). In contrast, it is thought that written English words are processed by their individual components (Pelli et al. 2003; Rumelhart and McClelland 1982), though words too can show some influence of whole-word structure, as in the word-superiority effect, in which a letter is more likely to be identified when flashed as part of a word than as part of a string of random letters (Feizabadi et al. 2021; Reicher 1969; Wheeler 1970). To date there are few studies of holistic processing of Chinese text. ...
Article
Full-text available
The many-to-many hypothesis suggests that face and visual-word processing tasks share neural resources in the brain, even though they show opposing hemispheric asymmetries in neuroimaging and neuropsychologic studies. Recently it has been suggested that both stimulus and task effects need to be incorporated into the hypothesis. A recent study found dual-task interference between face and text functions that lateralized to the same hemisphere, but not when they lateralized to different hemispheres. However, it is not clear whether a lack of interference between word and face recognition would occur for other languages, particularly those with a morpho-syllabic script, like Chinese, for which there is some evidence of greater right hemispheric involvement. Here, we used the same technique to probe for dual-task interference between English text, Chinese characters and face recognition. We tested 20 subjects monolingual for English and 20 subjects bilingual for Chinese and English. We replicated the prior result for English text and showed similar results for Chinese text with no evidence of interference with faces. We also did not find interference between Chinese and English text. The results support a view in which reading English words, reading Chinese characters and face identification have minimal sharing of neural resources.
... To ensure that readers were precisely recognizing the target word, we included a letter probe task (Reicher, 1969; Wheeler, 1970) to collect a measure of the reader's interpretation of the word. The task required participants to choose which of two letters had been presented in one of four possible locations of the word; one corresponded to the letter from the presented target word and the other to the letter from the orthographic neighbor. ...
Article
Readers extract information from a word in parafoveal vision prior to looking at it. It has been argued that parafoveal perception allows readers to initiate linguistic processes, but it is unclear which stages of word processing are engaged: the process of extracting letter information to recognize words, or the process of extracting meaning to comprehend them. This study used the event-related brain potential (ERP) technique to investigate how word recognition (indexed by the N400 effect for unexpected or anomalous compared to expected words) and semantic integration (indexed by the late positive component [LPC] effect for anomalous compared to expected words) are or are not elicited when the word is perceived only in parafoveal vision. Participants read a target word following a sentence that made it expected, unexpected, or anomalous, and read the sentences presented three words at a time in the Rapid Serial Visual Presentation (RSVP) with flankers paradigm so that words were perceived in parafoveal and foveal vision. We orthogonally manipulated whether the target word was masked in parafoveal and/or foveal vision to dissociate the processing associated with perception of the target word at either location. We found that the N400 effect was generated by parafoveally perceived words, and was reduced for foveally perceived words if they had previously been perceived parafoveally. In contrast, the LPC effect was only elicited if the word was perceived foveally, suggesting that readers must attend to a word directly in foveal vision in order to attempt to integrate its meaning into the sentence context.
... Additionally, we included a letter probe task (Reicher, 1969; Wheeler, 1970) requiring the participant to report which letter they actually saw in the letter position that differed between the two neighbors, in order to ensure that the word was actually recognized and not misperceived as the expected word, especially when it was presented in the parafovea [1]. As discussed previously, it is unclear from eye movements, when an anomalous word is skipped for example, ... [1: A primary motivation for including this task was to be able to split trials post hoc by accuracy and determine whether the patterns in the components of interest differed based on whether the participant consciously recognized that an anomaly had been presented.] ...
Article
Full-text available
Word recognition begins before a reader looks directly at a word, as demonstrated by the parafoveal preview benefit and word skipping. Both low-level form and high-level semantic features can be accessed in parafoveal vision and used to promote reading efficiency. However, words are not recognized in isolation during reading; once a semantic representation is retrieved, it must be integrated with the broader sentence context. One open question about parafoveal processing is whether it is limited to shallow stages of lexico-semantic activation or extends to semantic integration. In the present two-experiment study, we recorded event-related brain potentials (ERPs) in response to a sentence-final word that was presented in foveal or parafoveal vision and was either expected, unexpected, or anomalous in the sentence context. We found that word recognition, indexed by the N400, ensued regardless of perception location, whereas identification of the semantic fit of a word in its sentence context, indexed by the LPC, was only observed for foveally perceived but not parafoveally perceived words. This pattern was not sensitive to task differences that promote different levels of orthographic scrutiny, as manipulated between the two experiments. These findings demonstrate separate roles for parafoveal and foveal processing in reading.
Public Significance Statement: There has long been a public interest in the prospect of speed reading, but scientists have repeatedly challenged this possibility, pointing out that there is a tradeoff between reading speed and accuracy. In particular, if a reader skims the text and does not look at all or most of the words (as during "speed reading"), they cannot comprehend the text as well. The present study adds support, via neural data, for the idea that looking directly at a word is necessary to perform the aspects of the reading process that lead to a higher-level understanding of a word and its context.
... In our experimental studies, we used a phenomenon described at the end of the 19th century in Wilhelm Wundt's experimental psychology laboratory by Cattell (1886). This phenomenon, known as the "word superiority effect", later became a popular target for cognitive psychologists (McClelland & Rumelhart, 1981; Reicher, 1969; Wheeler, 1970). The word superiority effect refers to the better recognition of letters presented within words as compared to isolated letters and to letters presented within random nonword letter strings, when presentation is brief, masked, or contains visual noise, etc. ...
Article
Full-text available
The problem of consciousness is one of the core problems in contemporary cognitive science. Driven by the neuroimaging boom, most researchers look for the neural correlates or signatures of consciousness and awareness in the human brain. However, we believe that the explanatory potential of the cultural-historical activity approach to this problem is far from exhausted. We propose the Cognitive Psychology of Activity research program, or activity theory-based constructivism, as an attempt to account for multiple phenomena of human awareness and attention. This approach relies upon cultural-historical psychology and the concept of mediation by Lev S. Vygotsky, activity theory and the concept of image generation by Alexey N. Leontiev, the physiology of activity and the metaphor of movement construction by Nikolai A. Bernstein, transferred to the psychology of perception as image construction by a number of Russian researchers in the 1960s, and the understanding of attention as action by evolutionary cognitive psychologists of the 1980s. The central concept of our approach is the concept of task, defined by Leontiev as “a goal assigned in specific circumstances”. The goal determines the choice and use of available cultural means (“mediators”) consistent with the circumstances or conditions of task performance, which in turn provide for the construction of processing units allowing for more successful (“attentive”) performance and for the awareness of visual stimuli which could otherwise be missed or ignored. Perceptual task accomplishment is controlled at several levels organized heterarchically, with possible strategic reorganizations of this system demonstrating the constructive nature of human cognition.
... The leading computational models simulating human visual word recognition have identified a parallel process that involves a bottom-up operation of identifying letter features and whole letters and a top-down operation of lexical access to words and word parts (Reichle, 2021). This explains the well-established word superiority effect (Reicher, 1969; Wheeler, 1970), whereby a letter is easier to recognize within a word than in isolation, as words receive input signals from both the top-down and bottom-up operations while letters mostly receive input signals from the bottom-up operation on letter features. It has further been shown that for sentence reading, letter decoding accounts for 62% of the reading rate, word reading accounts for 16%, while the contextual structure of sentences accounts for the remaining 22% (Pelli and Tillman, 2007). ...
... Extended to sentences (Baddeley, Hitch & Allen, 2009; Toyota, 2001), the SSE refers to a participant's ability to recall a briefly presented target word more often in the context of a grammatical sentence as opposed to an ungrammatical one. With the development of the post-cued partial report method (Reicher, 1969; Wheeler, 1970), researchers used the Rapid Parallel Visual Presentation (RPVP) paradigm to show that whole-word identification is active before individual letter identification in sentence reading (McClelland & Rumelhart, 1981). Recent work has used the RPVP paradigm with post-cued partial report to identify a contribution of syntax to the SSE, with target word identification being greater when embedded in syntactically grammatical as opposed to ungrammatical sentences. ...
Article
Full-text available
A long-standing question about bilingualism concerns which representations are shared across languages. Recent work has revealed a bilingual Sentence Superiority Effect (SSE) among French–English bilinguals reading mixed-language sentences: identification of target words is more accurate in syntactically grammatical than ungrammatical sentences. While this ability to connect words across the two languages has been attributed to a rapid parsing of shared syntactic representations, outstanding questions remain about the role of semantics. Here, we replicate the SSE in Spanish–English bilinguals (e.g., better identification of vacío in “my vaso is vacío” [my glass is empty] than “is vaso my vacío” [is glass my empty]). Importantly, we report evidence that semantics do contribute to word identification, but significantly less than syntax and only in the context of syntactically grammatical sentences. Moreover, the effect is moderated by language proficiency, further constraining the conditions under which shared cross-linguistic representations are rapidly accessed in the bilingual mind.
... In the IAM, the role of feedback from higher levels in the hierarchy (i.e., the word level) is to enhance and direct the parallel processing of letters in written words. This feedback, they suggest, is the reason why words are processed more efficiently than single letters or letters in random strings, as revealed by the classical 'word superiority effect' (Reicher, 1969; Wheeler, 1970). The idea of feedback in word recognition is also mentioned by Dehaene et al. (2005) in their proposal for a 'neural code for written words', where they point out that 'feedback and lateral connections are numerous in the visual system, and probably contribute to shaping the neurons' receptive fields, for instance by enforcing probabilistic relations amongst consecutive letters, or by disambiguating letters and bigrams within words (thus explaining the word superiority effect)' (p. ...
Preprint
Full-text available
In a previous paper, we described the first-person experience of the first author (K.H.) with debut and partial remission of pure alexia following a stroke in the left posterior cerebral artery. In addition, we presented neuropsychological data on reading and visual recognition over a number of years following his stroke. Here, we present an outline of a model of reading and visual word recognition, developed by K.H. in an attempt to understand his reading problems and their development over time. K.H. had no knowledge of reading models before his stroke, but was familiar with computer science and models for computer vision. His model of reading was developed based on introspective observation of his loss and re-learning of reading, and aims to explain both the breakdown of his reading process and the partial remission. In closing, we discuss K.H.’s proposed model in relation to published cognitive models of reading, with a particular focus on visual word recognition. Some of the similarities between K.H.’s introspective model and textbook models of reading are striking and intriguing. In particular, K.H. suggests that (i) visual word recognition may be accomplished by different routes, (ii) fast and fluent word recognition is likely to be accomplished by a specialized module that is damaged in pure alexia, and (iii) visual letter and word recognition is achieved through the computation of abstract letter representations independent of size and font.
... First, from a bottom-up, stimulus-driven perspective, word recognition and reading involve scanning letters or chunks of letters (graphemes) in quick succession and grouping them into words, which is a more demanding task than face recognition (faces, by contrast, involve a limited set of features in a broadly similar spatial arrangement and are not presented in quick spatial and temporal succession). Second, feedback from lexical processing to early perceptual processing contributes to word cohesiveness (Reicher, 1969; Wheeler, 1970). For example, the advantage of word context is interpreted as a top-down influence of whole-word representations on letter recognition (McClelland & Rumelhart, 1981). ...
Article
Full-text available
The question of whether word and face recognition rely on overlapping or dissociable neural and cognitive mechanisms received considerable attention in the literature. In the present work, we presented words (aligned or misaligned) superimposed on faces (aligned or misaligned) and tested the interference from the unattended stimulus category on holistic processing of the attended category. In Experiment 1, we found that holistic face processing is reduced when a face was overlaid with an unattended, aligned word (processed holistically). In Experiment 2, we found a similar reduction of holistic processing for words when a word was superimposed on an unattended, aligned face (processed holistically). This reciprocal interference effect indicates a trade-off in holistic processing of the two stimuli, consistent with the idea that word and face recognition may rely on non-independent, overlapping mechanisms.
... Prior research has provided evidence for interesting parallels between letter-word processing on the one hand, and word-sentence processing on the other. The "word superiority effect" [17][18][19] refers to the higher accuracy in single letter identification when the target letter is presented in a word (e.g., the letter B in TABLE) compared with a pseudoword (e.g., the letter B in PABLE). More recently, a "sentence superiority effect" 20,21 has been reported whereby identification of a single word target is better when that word is presented in the context of a correct sentence (e.g., target BOY in the sentence: "the boy runs fast") compared with identification of the same word at the same position in an ungrammatical sequence (e.g., "runs boy fast the"). ...
Article
Full-text available
Much prior research on reading has focused on a specific level of processing, with this often being letters, words, or sentences. Here, for the first time in adult readers, we provide a combined investigation of these three key component processes of reading comprehension. We did so by testing the same group of participants in three tasks thought to reflect processing at each of these levels: alphabetic decision, lexical decision, and grammatical decision. Participants also performed a non-reading classification task, with the aim of partialling out common binary decision processes from the correlations across the three main tasks. We examined the pairwise partial correlations for response times (RTs) in the three reading tasks. The results revealed strong significant correlations across adjacent levels of processing (i.e., letter-word; word-sentence) and a non-significant correlation between non-adjacent levels (letter-sentence). The results provide an important new benchmark for evaluating computational models that describe how letters, words, and sentences contribute to reading comprehension.
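The partial-correlation logic described in this abstract can be sketched as follows. The data are simulated (a shared "decision speed" factor standing in for the control task, plus a shared reading factor linking the letter-level and word-level tasks; all variable names and parameters are invented); the point is only to show how regressing out control-task variance attenuates, without abolishing, the correlation between two reading tasks.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z,
    via the residual method: regress z out of both, correlate residuals."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

rng = np.random.default_rng(1)
n = 200
speed = rng.normal(0, 1, n)    # generic decision speed (the control task)
reading = rng.normal(0, 1, n)  # reading-specific factor shared by both tasks
letter_rt = 500 + 40 * speed + 25 * reading + rng.normal(0, 20, n)
word_rt   = 550 + 40 * speed + 25 * reading + rng.normal(0, 20, n)

r_raw = float(np.corrcoef(letter_rt, word_rt)[0, 1])   # inflated by speed
r_partial = partial_corr(letter_rt, word_rt, speed)    # reading link remains
```

In this construction `r_partial` stays well above zero because the tasks share reading-specific variance beyond generic speed, mirroring the adjacent-level correlations the study reports.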
... This would be the case if the visual system learns specialized features for detecting combinations of letters en route to whole word representations [71]. Indeed, classic studies on the word superiority effect show that letter recognition is improved when letters appear in the context of a word [72,73]. However, focusing on individual letter-representations is not wholly unjustified, as there is some empirical support that letter string representations are primarily linear combinations of letter representations [44]. ...
Article
Full-text available
After years of experience, humans become experts at perceiving letters. Is this visual capacity attained by learning specialized letter features, or by reusing general visual features previously learned in service of object categorization? To explore this question, we first measured the perceptual similarity of letters in two behavioral tasks, visual search and letter categorization. Then, we trained deep convolutional neural networks on either 26-way letter categorization or 1000-way object categorization, as a way to operationalize possible specialized letter features and general object-based features, respectively. We found that the general object-based features more robustly correlated with the perceptual similarity of letters. We then operationalized additional forms of experience-dependent letter specialization by altering object-trained networks with varied forms of letter training; however, none of these forms of letter specialization improved the match to human behavior. Thus, our findings reveal that it is not necessary to appeal to specialized letter representations to account for perceptual similarity of letters. Instead, we argue that it is more likely that the perception of letters depends on domain-general visual features.
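The model-behavior comparison described in this abstract follows the logic of representational similarity analysis: vectorize the off-diagonal entries of each similarity matrix and rank-correlate the vectors. Here is a self-contained sketch of those mechanics with random stand-in matrices (no real model or behavioral data; the "object-trained" matrix is constructed to track the behavioral one, the "letter-trained" matrix is independent).

```python
import numpy as np

def upper(m):
    """Vectorize the upper triangle (excluding the diagonal) of a matrix."""
    i, j = np.triu_indices(m.shape[0], k=1)
    return m[i, j]

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks
    (valid here because the random entries have no ties)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])

rng = np.random.default_rng(2)
n_letters = 26
behav = rng.random((n_letters, n_letters))
behav = (behav + behav.T) / 2                 # behavioral similarity matrix
obj_model = behav + 0.3 * rng.random((n_letters, n_letters))
obj_model = (obj_model + obj_model.T) / 2     # tracks behavior, plus noise
let_model = rng.random((n_letters, n_letters))
let_model = (let_model + let_model.T) / 2     # unrelated to behavior

r_obj = spearman(upper(behav), upper(obj_model))
r_let = spearman(upper(behav), upper(let_model))
```

By construction `r_obj` exceeds `r_let`, which is the shape of the comparison the study runs between object-trained and letter-trained network features.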
... With the aim of studying letter and word recognition, a great deal of research has investigated the word superiority effect, which refers to the fact that observers are better at recognizing a letter within the context of a word than in isolation (Reicher, 1969). The word superiority effect has been quantified by measuring the percentage of correct identifications, vocal reaction times, and contrast thresholds, i.e., the minimum amount of ink in the stimuli that observers need to reach a performance criterion (Wheeler, 1970; Reicher, 1969; Jordan & deBruijn, 1993; Babkoff, Faust & Lavidor, 1997; Pelli et al., 2003). In all these cases the magnitude of the effect is about a factor of 1.4 in favor of words vs. nonwords or letters in isolation. ...
Technical Report
Full-text available
Visual and linguistic factors in literacy acquisition: Instructional Implications For Beginning Readers in Low-Income Countries. A literature review prepared for the Global Partnership for Education, c/o World Bank.
... Another possibility is that the Dyslexie typeface is beneficial at the letter level. Key theories of reading (e.g., Perry, Ziegler, & Zorzi, 2007; Rumelhart & McClelland, 1982) argue that processes underlying word recognition are highly interactive, with input from both bottom-up (letter/feature level) and top-down (word level) processes, as evidenced by the fact that letters are detected more quickly when embedded in words than in nonwords (e.g., STXRN), or even when presented alone (Reicher, 1969; Wheeler, 1970). Such top-down effects have also been shown in young, developing readers (Coch, Mitra, & George, 2012), including those with dyslexia (Grainger, Bouttevin, Truc, Bastien, & Ziegler, 2003). ...
Article
Full-text available
Children with dyslexia are at risk of poor academic attainment and lower life chances if they do not receive the support they need. Alongside phonics-based interventions, which already have a strong evidence base, specialist dyslexia typefaces have been offered as an additional or alternative form of support. The current study examined whether one such typeface, Dyslexie, had a benefit over a standard typeface in identifying letters, reading words, and reading passages. Seventy-one children, aged 8–12 years, 37 of whom had a diagnosis of dyslexia, completed a rapid letter naming task, a word reading efficiency task, and a passage reading task in two typefaces, Dyslexie and Calibri. Spacing between letters and words was kept constant. Results showed no differences in word or passage reading between the two typefaces, but letter naming did appear to be more fluent when letters were presented in Dyslexie rather than Calibri text for all children. The results suggest that a typeface in which letters are designed to be distinctive from one another may be beneficial for letter identification and that an intervention in which children are taught letters in a specialist typeface is worthy of consideration.
... The first stage is a visual analysis of the word that begins at stimulus onset and lasts for about 200 ms [35]. There, we observe the effects of the word's orthographic features, as well as familiarity effects (e.g., at the P150 component; [36,37]). The second phase is associated with automatic processing of the word and can be identified by the onset of emotional modulation, which starts between 200 and 300 ms after onset [32]. ...
Article
Full-text available
Warmth and competence are fundamental dimensions of social cognition. This also applies to the interpretation of ambiguous symbolic stimuli in terms of their relation to warmth or competence. The affective state of an individual may affect the way people interpret neutral stimuli in the environment. As previous findings have shown, it is possible to alter the perception of neutral social stimuli in terms of warmth vs. competence by eliciting an incidental affect with the use of emotion-laden words. In the current experiment, we expected the valence and origin of an affective state, two factors that characterize emotion-laden words, to be able to switch the interpretation of neutral objects. We have shown in behavioural results that negative valence and reflective origins promote the interpretation of unknown objects in terms of competence rather than warmth. Furthermore, response-locked electrophysiological analyses revealed differences specific to negative valence both while the decision in the ambiguous task was being made and while it was being executed. The results of the current experiment show that the use of warmth and competence in social cognition is susceptible to affective state manipulation. In addition, the results are coherent with the evolutionary perspective on social cognition (valence effects) as well as with predictions of the dual mind model of emotion (origin effects).
... GLMMs replicated the effects of word-form availability and contingency and additionally revealed independent phrase-superiority effects where morphemes were better reproduced in phrasal contexts of higher string-frequency. These are the phrasal equivalents of word-superiority effects (Wheeler, 1970) whereby recognition of a letter is more accurate when it is part of a meaningful word than when it is alone. Taken together, these findings demonstrated that morpheme acquisition reflects the distributional properties of learners' experience and the mappings therein between lexis, morphology, phraseology, and semantics. ...
Article
Full-text available
Second language (L2) speakers have especial difficulty learning and processing morphosyntax. I present a usage-based analysis of this phenomenon. Usage-based approaches to language learning hold that we learn constructions (form-function mappings, conventionalized in a speech community) from language usage by means of general cognitive mechanisms (exemplar-based, rational, associative learning). An individual’s language system emerges from the conspiracy of these associations. I take the broad theoretical framework of language as a complex adaptive system and then focus in upon several subcomponents of the ecology including: cognitive linguistics and construction grammar; the psychology of implicit and explicit learning; effects of contingency and salience upon associative learning; the linguistic cycle: Zipf’s law, shortening, and grammaticalization; the low learnability of morphology that results from language change; learned attention, blocking, and transfer; form-focused dialogic feedback or instruction in second language acquisition (L2A); the types and tokens of exemplars that lead the learning of particular morphemes from usage and the distributional effects of frequency, reliability, and formulaic contexts. In terms of fundamental principles of associative learning: Low salience, low contingency, and redundancy all lead to morphological form-function mappings being less well learned. Compounding this, adult L2 acquirers show effects of learned attention and blocking as a result of L1-tuned automatized processing of language. I review a series of experimental studies of learned attention and blocking in L2A. I describe educational interventions targeted upon these phenomena. Form-focused instruction recruits learners’ explicit, conscious processing capacities and allows them to notice novel L2 constructions. 
Once a construction has been represented as a form-function mapping, its use in subsequent implicit processing can update the statistical tallying of its frequency of usage and probabilities of form-function mapping, consolidating it into the system.
... Students are actively involved in the discovery of various concepts and principles through problem solving or through abstraction from various cultural objects. Students can fully master mathematical concepts and rules when they are actively involved in thinking about, discovering, and reconstructing the mathematical knowledge being studied (Ernest, 1991; Wheeler, 1970). The teacher encourages and motivates students to gain experience through activities that allow them to discover mathematical concepts and principles for themselves. ...
Conference Paper
Full-text available
Several studies have shown that the monochromatic Tower of Hanoi (TOH) puzzle enhances students' problem-solving ability in mathematics. This study aimed to determine students' problem-solving ability through the TOH puzzle presented in three realia: unicolored, bicolored, and multicolored. The study used a randomized block design, with random assignment used to select the 117 participants. Results showed that the female group performed better over time in solving the bicolored realia, consistent with the claim that females have a keener sense of color perception than males. Further, the study also revealed how students developed their own strategies for solving the puzzle. Qualitative data generated through interviews showed an improvement in students' problem-solving ability in general. The study recommended that the Tower of Hanoi puzzle be made part of problem-solving activities in classroom instruction.
... Letters in meaningful words were processed more readily than individual letters forming meaningless strings (e.g. Wheeler, 1970). The WSE has been explained as a memory effect, in which it is more efficient to rehearse one word, rather than several letters (Massaro, 1973). ...
Thesis
Three paradigms were used to compare performance across meaningful and meaningless stimuli. Within each type of stimulus the relative performances with simple and complex objects were examined. The complexity was used as an analytical tool. In a segment-based representation, with relatively independent parts, there would be more parts in the representation of a complex stimulus than in that of a simple stimulus. In holistic representations the parts would be less independent; the number of parts would be less influential than the relationship among them. The first study used a mental rotation task. A segmented representation would show an interactive effect of orientation and complexity, whereas a holistic representation would not. The findings suggested that the meaningful stimuli were rotated part by part, whereas the meaningless objects were rotated holistically. The second study used a part search task. It was predicted that the dependence among the parts in a holistic representation would result in a greater difference in performance across complexities than in a segmented representation with independent parts. The complexity of the stimulus showed a greater difference in the meaningful stimuli than in the meaningless stimuli. Assuming that holistic representations made more use of configural information than segment-based representations, the final study tested the contribution of configural information to the representation during a binary forced-choice probe task. The meaningful stimuli showed a greater advantage when configural information was present, especially the complex stimuli, relative to the meaningless stimuli. The findings suggested that meaningful objects were represented more holistically than comparable meaningless objects; this difference is greater in the complex stimuli.
... However, each letter could be shown with different graphemes in cursive writing systems like Persian orthography. Partial information from words may help participants guess the identity of the critical target letter (Reicher, 1969; Wheeler, 1970). The similarity among Persian letters, especially when they are embedded in words, is the basis of this research. ...
Article
Full-text available
In this study, we compared children’s and adults’ ability to accurately identify target words in written minimal pairs (WMPs) with graphemically similar letters while accounting for factors such as gender, similarity of the middle letter in WMPs, mono- versus dimorphemic WMPs, number of syllables, homography, and imageability. Fifty children and fifty adults were exposed to a distractor stimulus as a pre-mask, followed by the target, and then a post-mask stimulus. Subsequently, the corresponding WMPs including the target word and its graphemically minimal contrast were presented to the participants to obtain their reaction time (RT) in accurately identifying the target word. Results demonstrated that children tend to slow down their reaction as a compensatory strategy to circumvent their less mature knowledge of graphophonic units/morphemes to achieve accuracy during word recognition. In addition, among all controlled factors, children’s RT was significantly influenced by similarity of the middle letter in the WMPs. Adults’ RT, however, was influenced by factors such as gender, similarity of the middle letter in WMPs, and homography.
... For example, Jensen (1998) claimed that choice reaction time is an indirect measure of neuronal conduction speed, but is it? Was the way people retrieved letter names in the Posner and Mitchell (1967) task identical or even similar to the way they retrieved letter names when the letters were embedded in meaningful words in real-world verbal contexts (Reicher, 1969; Wheeler, 1970)? Sometimes, abstracting a process from its real-world context may change the process or how it is executed. ...
Article
Full-text available
This article discusses the issues of the basic processes underlying intelligence, considering both historical and contemporary perspectives. The attempt to elucidate basic processes has had, at best, mixed success. There are some problems with pinpointing the underlying basic processes of intelligence, both in theory and as tested, such as what constitutes a basic process, what constitutes intelligence, and whether the processes, basic or not, are the same across time and space (cultural contexts). Nevertheless, the search for basic processes has elucidated phenomena of intelligence that the field would have been hard-pressed to elucidate in any other way. Intelligence cannot be fully understood through any one conceptual or methodological approach. A comprehensive understanding of intelligence requires the converging operations of a variety of approaches to it.
... In our experimental studies, we used a phenomenon described at the end of the 19th century in Wilhelm Wundt's experimental psychology laboratory by Cattell (1886). This phenomenon, known as the "word superiority effect", later became a popular target for cognitive psychologists (McClelland & Rumelhart, 1981; Reicher, 1969; Wheeler, 1970). The word superiority effect refers to the better recognition of letters presented within words, as compared to isolated letters and letters presented within random nonword letter strings, when presentation is brief, masked, or degraded by visual noise. ...
Book
Full-text available
The ISCAR 2017 conference decided to take a 360° view of the Society’s landscape. It was thus an opportunity to reach back to the roots of cultural-historical theory and to invite scholars from Russia to contribute to a Special Issue to better grasp the state of our scholarship in practice. The congress theme invoked the past, the present, and the future of cultural-historical activity research, and it is with that intention that the two editors invite you to enjoy reading our Special Issue. We hope that it will allow you to engage in challenging conversations on issues that could lead you to revisit ideas or expand upon our current practices. We propose 22 articles written by more than 24 Russian authors.
... What is the evidence in favor of the architecture described in Fig. 1 and the cascaded-interactive nature of processing between the three levels in that architecture? With respect to interactions between letter-level and word-level processing (path (a) in Fig. 1), the most relevant research here concerns the so-called "word superiority effect" [13][14][15] . This effect refers to the higher accuracy in single letter identification when the target letter is presented in a word (e.g., the letter B in TABLE) compared with a pseudoword (e.g., the letter B in PABLE). ...
Preprint
Full-text available
Much prior research on reading has focused on a specific level of processing, with this often being letters, words, or sentences. Here, for the first time, we provide a combined investigation of these three key component processes of reading comprehension. We did so by testing the same group of participants in three tasks thought to reflect processing at each of these levels: alphabetic decision, lexical decision, and grammatical decision. Participants also performed a non-reading classification task, with the aim of partialling out common binary decision processes from the correlations across the three main tasks. We examined the pairwise partial correlations for response times (RTs) in the three reading tasks. The results revealed strong significant correlations across adjacent levels of processing (i.e., letter-word; word-sentence) and a non-significant correlation between non-adjacent levels (letter-sentence). The results fit best with hierarchically organized cascaded-interactive accounts of how letters, words, and sentences contribute to reading comprehension.
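The partialling-out logic described in this abstract can be sketched with a first-order partial correlation. The data below are simulated purely for illustration (a hypothetical shared "decision speed" factor standing in for the non-reading control task); nothing here comes from the study itself:

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation of x and y, controlling for z."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(0)
speed = rng.normal(size=200)            # hypothetical shared decision-speed factor
letter = speed + rng.normal(size=200)   # simulated alphabetic-decision RTs
word = speed + rng.normal(size=200)     # simulated lexical-decision RTs

# The raw correlation is inflated by the shared factor;
# partialling it out isolates the level-specific relationship.
raw = np.corrcoef(letter, word)[0, 1]
partial = partial_corr(letter, word, speed)
```

Because the two simulated tasks correlate only through the shared factor, the partial correlation collapses toward zero once that factor is controlled, which is the logic behind using a non-reading task as a covariate.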
... 24-25). Interestingly, she then supplies an analogy from a perceptual domain, namely vision, citing Wheeler's (1970) work, which shows that letters are recognized faster in the context of a word, i.e., the recognition (top down) of a word aids the recognition of a letter, and vice versa. This is precisely the type of example from which the PP paradigm has received significant support (e.g. ...
Article
Full-text available
Predictive Processing (PP) is an increasingly influential neurocognitive-computational framework. PP research has so far focused predominantly on lower level perceptual, motor, and various psychological phenomena. But PP seems to face a “scale-up challenge”: How can it be extended to conceptual thought, language, and other higher cognitive competencies? Compositionality, arguably a central feature of conceptual thought, cannot easily be accounted for in PP because it is not couched in terms of classical symbol processing. I argue, using the example of language, that there is no strong reason to think that PP cannot be scaled up to higher cognition. I suggest that the tacitly assumed common-sense conception of language as Generative Grammar (“folk linguistics”) and its notion of composition leads to the scale-up concerns. Fodor’s Language of Thought Hypothesis (LOTH) plays the role of a cognitive computational paradigm for folk linguistics. Therefore, we do not take LOTH as facing problems with higher cognition, at least with regard to compositionality. But PP can plausibly play the role of a cognitive-computational paradigm for an alternative conception of language, namely Construction Grammar. If Construction Grammar is a plausible alternative to folk linguistics, then PP is not in a worse position than LOTH.
... Moreover, in psycholinguistic research the importance of context has been highlighted by the word superiority effect (WSE; Reicher, 1969; Wheeler, 1970), whereby letters are identified better when embedded within a word than in a nonword string, or when presented alone. ...
Article
Full-text available
Social distancing and isolation have been imposed to counter the spread of COVID‐19. The present study investigates whether social distancing affects our cognitive system, in particular the processing of different types of brand logos at different moments of the pandemic spread in Italy. In a size discrimination task, six different logos belonging to three categories (letters, symbols, and social images) were presented in their original format and spaced. Two samples of participants were tested: one just after the pandemic spread in Italy, the other after 6 months. Results showed an overall distancing effect (i.e., spaced stimuli are processed more slowly than original ones) that interacted with the sample, revealing a significant effect only for participants belonging to the second sample. However, both groups showed a distancing effect modulated by the type of logo, as it only emerged for social images. Results suggest that social distancing behaviors have been integrated into our cognitive system, as they appear to affect our perception of distance when social images are involved. In this manuscript, we investigate the possibility that the social distancing imposed to counter the spread of COVID‐19 affected our cognitive system. We focused on the processing of commercial logos presented in their original format and spaced. Results showed that the imposed social distancing behaviors have been integrated into our cognitive system, since they affect our perception of distance when social images are involved.
Article
Research has shown that information processing differences associated with autism could impact on language and literacy development. This study tested an approach to autistic cognition that suggests learning occurs via prediction errors, and autistic people have very precise and inflexible predictions that result in more sensitivity to meaningless signal errors than non-autistic readers. We used this theoretical background to investigate whether differences in prediction coding influence how orthographic (Experiment 1) and semantic information (Experiment 2) is processed by autistic readers. Experiment 1 used a lexical decision task to test whether letter position information was processed less flexibly by autistic than non-autistic readers. Three types of letter strings: words, transposed letter and substituted letters nonwords were presented. Experiment 2 used a semantic relatedness task to test whether autistic readers processed words with high and low semantic diversity differently to non-autistic readers. Results showed similar transposed letter and semantic diversity effects for all readers; indicating that orthographic and semantic information are processed similarly by autistic and non-autistic readers; and therefore, differences in prediction coding were not evident for these lexical processing tasks.
Article
When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, by internally-generated linguistic units, or by the interplay of both remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-context are less constraining. When language was not comprehended, acoustic but not phonemic features were more strongly modulated; in contrast, when a native language was comprehended, phoneme features were more strongly modulated.
Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
Article
Full-text available
During reading, the brain is confronted with many relevant objects at once. But does lexical processing occur for multiple words simultaneously? Cognitive science has yet to answer this prominent question. Recently it has been argued that the issue warrants supplementing the field's traditional toolbox (response times, eye-tracking) with neuroscientific techniques (EEG, fMRI). Indeed, according to the OB1-reader model, upcoming words need not impact oculomotor behavior per se, but parallel processing of these words must nonetheless be reflected in neural activity. Here we combined eye-tracking with EEG, time-locking the neural window of interest to the fixation on target words in sentence reading. During these fixations, we manipulated the identity of the subsequent word so that it posed either a syntactically legal or illegal continuation of the sentence. In line with previous research, oculomotor measures were unaffected. Yet, syntax impacted brain potentials as early as 100 ms after the target fixation onset. Given the EEG literature on syntax processing, the presently observed timings suggest parallel word reading. We reckon that parallel word processing typifies reading, and that OB1-reader offers a good platform for theorizing about the reading brain.
Book
Full-text available
The Society of Applied Neuroscience, in cooperation with the Aristotle University of Thessaloniki, had the pleasure of hosting the SAN2022 Conference in Thessaloniki, Greece, on 15-17 September 2022. Neuroscience, especially applied neuroscience, is now at a crossroads: there exist enough results and evidence to promise that proper methodologies, techniques, and technologies can be translated into clinical practice or have a direct impact in certain application fields, on wider society, and on its key problems and lifetime challenges. We feel confident that, as a Society, we provide some bits of this puzzle towards the achievement of this endeavor. The Conference put together a highly promising scientific program with a total of 60 conference papers selected after a peer-review process (presented as oral and poster presentations), a unique list of 6 workshops (most of them hands-on or with the participation of practitioners), and 8 symposia (all of them on hot themes/topics). The Conference Proceedings were edited by Prof. Panagiotis D. Bamidis, President of the Society of Applied Neuroscience, and Dr. Alkinoos Athanasiou. The author names of each conference paper may be found in the schedule and on the paper's corresponding pages.
Preprint
Full-text available
When we comprehend language from speech, the phase of the neural response aligns with particular features of the speech input, resulting in a phenomenon referred to as neural tracking. In recent years, a large body of work has demonstrated the tracking of the acoustic envelope and abstract linguistic units at the phoneme and word levels, and beyond. However, the degree to which speech tracking is driven by acoustic edges of the signal, by internally-generated linguistic units, or by the interplay of both remains contentious. In this study, we used naturalistic story-listening to investigate (1) whether phoneme-level features are tracked over and above acoustic edges, (2) whether word entropy, which can reflect sentence- and discourse-level constraints, impacted the encoding of acoustic and phoneme-level features, and (3) whether the tracking of acoustic edges was enhanced or suppressed during comprehension of a first language (Dutch) compared to a statistically-familiar but uncomprehended language (French). We first show that encoding models with phoneme-level linguistic features, in addition to acoustic features, uncovered an increased neural tracking response; this signal was further amplified in a comprehended language, putatively reflecting the transformation of acoustic features into internally-generated phoneme-level representations. Phonemes were tracked more strongly in a comprehended language, suggesting that language comprehension functions as a neural filter over acoustic edges of the speech signal as it transforms sensory signals into abstract linguistic units. We then show that word entropy enhances neural tracking of both acoustic and phonemic features when sentence- and discourse-context are less constraining. When language was not comprehended, acoustic but not phonemic features were more strongly modulated; in contrast, when a first language was comprehended, phoneme features were more strongly modulated.
Taken together, our findings highlight the flexible modulation of acoustic and phonemic features by sentence- and discourse-level constraints in language comprehension, and document the neural transformation from speech perception to language comprehension, consistent with an account of language processing as a neural filter from sensory to abstract representations.
Book
Full-text available
In recent years there has been a profound conceptual shift regarding reading. Research in this field has shown that the postulates of the models that predominated before the late 1970s, which conceived of the reader as a passive receiver of the text's content or as an executor of routine skills, and which for a long time underpinned traditional procedures for teaching reading comprehension, lack validity. In response, a series of alternative models (interactive, metacognitive, etc.) have emerged which, from a cognitive perspective, regard reading comprehension as a process of constructing the meaning of the text in which the reader participates actively and in which multiple variables interact. Research within these models has shown that effective performance in any complex domain, such as reading comprehension, requires knowledge of that domain, specific procedures for operating within it, and more general knowledge and strategies that support conscious action and self-monitoring. It has also been found that people with cognitive deficits tend to experience problems in all of these areas, so the most effective instruction is that which takes all of them into account and fosters active participation and independent performance. However, very few process-focused programs have been developed to teach multiple comprehension strategies to people with cognitive deficits, and the rehabilitation of these individuals frequently still relies on traditional, product-centered procedures for teaching reading comprehension.
Therefore, with the aim of establishing the general framework and basic principles needed to create a suitable instructional program (and a valid assessment battery to verify its effectiveness), the first part of this thesis: briefly describes the general context of our study and analyzes the main general models of reading and specific models of reading comprehension that have been proposed; addresses the nature and functioning of the main variables, processes, and components involved in text comprehension; analyzes the various methods proposed for assessing reading comprehension, with particular emphasis on their advantages and limitations; describes the differences between readers with high and low comprehension ability; reviews some of the experimental intervention studies that have attempted to improve reading comprehension in basic-education students; and clarifies what we mean by "individuals with cognitive deficits", addressing the question of their instruction, especially in relation to reading comprehension. After reviewing the theoretical assumptions that guided our research, the second part describes and discusses the experimental work carried out, which consisted of designing and evaluating a program for teaching multiple text-comprehension strategies to people with cognitive deficits. The results show the superiority of the program we created over a traditional procedure for teaching reading comprehension.
Specifically, our program was clearly superior in fostering the development of knowledge and use of strategies for monitoring the comprehension process and of the strategies needed to make inferences and predictions, and it also proved superior in promoting the development of knowledge and use of synthesis strategies.
Article
Full-text available
Quantitative predictions are made from a model for word recognition. The model has as its central feature a set of "logogens," devices which accept information relevant to a particular word response irrespective of the source of this information. When more than a threshold amount of information has accumulated in any logogen, that particular response becomes available for responding. The model is tested against data available on (1) the effect of word frequency on recognition, (2) the effect of limiting the number of response alternatives, (3) the interaction of stimulus and context, and (4) the interaction of successive presentations of stimuli. Implications of the underlying model are largely upheld. Other possible models for word recognition are discussed as are the implications of the logogen model for theories of memory. (30 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
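The threshold mechanism this abstract describes can be sketched in a few lines of Python. This is a minimal illustration of the logogen idea, not Morton's actual model; the class names, thresholds, and evidence values are invented for the example, with word frequency modeled as a lower threshold for common words.

```python
# Minimal sketch of the logogen idea: each word unit accumulates evidence
# from any source, and a response becomes available once the accumulated
# count crosses that unit's threshold. All numbers are illustrative.

class Logogen:
    def __init__(self, word, threshold):
        self.word = word
        self.threshold = threshold  # frequent words: lower threshold
        self.count = 0.0

    def add_evidence(self, amount):
        """Accumulate evidence irrespective of its source."""
        self.count += amount
        return self.count >= self.threshold  # response now available?

def recognize(evidence, logogens):
    """Feed evidence to every logogen; return the words whose
    thresholds have been exceeded (available responses)."""
    available = []
    for lg in logogens:
        if lg.add_evidence(evidence.get(lg.word, 0.0)):
            available.append(lg.word)
    return available

# A frequent word (lower threshold) fires on evidence that leaves a
# rare word below threshold.
units = [Logogen("the", threshold=2.0), Logogen("thy", threshold=4.0)]
print(recognize({"the": 2.5, "thy": 2.5}, units))  # ['the']
```

Because evidence is pooled regardless of source, sensory input and context contribute to the same counter, which is how the model handles the stimulus-context interaction tested in the paper.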
Article
Full-text available
Investigates whether the relation between the frequency of occurrence of a word and its sensory threshold can properly be ascribed to response bias. Seven models for the recognition process are tested against data of C. R. Brown and H. Rubenstein (see 36:2). The models are of two kinds: probabilistic single-threshold models and information processing models. It is concluded that the data can only be accounted for within the former class of models by assuming a differential effect of the stimulus and response bias. For the class of information processing models, the data require the assumptions that there is equal sensitivity for words of all frequencies and a lower criterion for more common words. The relation between the two acceptable models makes it apparent that the notions of stimulus effect and response effect are not completely clear. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Full-text available
Following extended training in a visual detection task, functions were determined for individual Ss relating latency of detection responses to number of redundant signal elements embedded in tachistoscopic displays of letters and to distance between signal elements. Latency proved invariant with respect to number of redundant signals and varied non-monotonically with distance. Of the several types of models considered in relation to previous studies, substantial support was forthcoming only for an independent stimulus sampling model. It is suggested that the detection method effects a relatively clear separation of the perceptual from the mnemonic aspects of the standard visual apprehension experiment, and that the sampling process may constitute only the first phase of a more general model which includes both parallel and serial information processing.
Article
Full-text available
How much can be seen in a single brief exposure? This is an important problem because our normal mode of seeing greatly resembles a sequence of brief exposures. In this report, the following experiments were conducted to study quantitatively the information that becomes available to an observer following a brief exposure. Lettered stimuli were chosen because these contain a relatively large amount of information per item and because these are the kind of stimuli that have been used by most previous investigators. The first two experiments are essentially control experiments; they attempt to confirm that immediate-memory for letters is independent of the parameters of stimulation, that it is an individual characteristic. In the third experiment the number of letters available immediately after the extinction of the stimulus is determined by means of a sampling (partial report) procedure described. The fourth experiment explores decay of available information with time. The fifth experiment examines some exposure parameters. In the sixth experiment a technique which fails to demonstrate a large amount of available information is investigated. The seventh experiment deals with the role of the historically important variable: order of report. It was found that each observer was able to report only a limited number of symbols correctly. For exposure durations from 15 to 500 msec, the average was slightly over four letters; stimuli having four or fewer letters were reported correctly nearly 100% of the time. It is also concluded that the high accuracy of partial report observed in the experiments does not depend on the order of report or on the position of letters on the stimulus, but rather it is shown to depend on the ability of the observer to read a visual image that persists for a fraction of a second after the stimulus has been turned off.
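The partial-report logic in this abstract reduces to a small arithmetic sketch. All numbers below are assumptions chosen for illustration, not values taken from the paper: a 12-letter array laid out as 3 rows of 4, and a report span of about 4 items.

```python
# Illustrative sketch of the partial-report estimate: whole report is
# capped by the report span, while accuracy on one randomly cued row,
# scaled up by the number of rows, estimates how many letters were
# actually available in the fading visual image. Numbers are assumed.

SPAN = 4    # assumed limit on how many items can be reported
ROWS = 3    # assumed array layout: 3 rows of 4 letters

def whole_report(letters_available):
    """Whole report is capped by the span, however many letters are legible."""
    return min(SPAN, letters_available)

def partial_report_estimate(correct_in_cued_row, rows=ROWS):
    """Scale accuracy on one randomly cued row up to the whole array."""
    return correct_in_cued_row * rows

# Suppose ~9 of the 12 letters are still legible when the cue sounds:
print(whole_report(9))             # 4: whole report hits the span limit
print(partial_report_estimate(3))  # 9: partial report reveals more is available
```

The gap between the two numbers is the paper's central point: far more is momentarily available in the persisting image than the span-limited whole report can show.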
Article
Full-text available
Thesis (Ph. D.)--Harvard University, 1959. Includes bibliographical references (leaves 95-101). Microfilm.
Article
Sequences of 6 letters of the alphabet were visually presented for immediate recall to 387 subjects. Errors showed a systematic relationship to original stimuli. This is held to meet a requirement of the decay theory of immediate memory. The same letter vocabulary was used in a test in which subjects were required to identify the letters spoken against a white noise background. A highly significant correlation was found between letters which confused in the listening test, and letters which confused in recall. The role of neurological noise in recall is discussed in relation to these results. It is further argued that information theory is inadequate to explain the memory span, since the nature of the stimulus set, which can be defined quantitatively, as well as the information per item, is likely to be a determining factor.
Article
An information processing model of elementary human symbolic learning is given a precise statement as a computer program, called Elementary Perceiver and Memorizer (EPAM). The program simulates the behavior of subjects in experiments involving the rote learning of nonsense syllables. A discrimination net which grows is the basis of EPAM's associative memory. Fundamental information processes include processes for discrimination, discrimination learning, memorization, association using cues, and response retrieval with cues. Many well-known phenomena of rote learning are to be found in EPAM's experimental behavior, including some rather complex forgetting phenomena. EPAM is programmed in Information Processing Language V. H. A. Simon has described some current research in the simulation of human higher mental processes and has discussed some of the techniques and problems which have emerged from this research. The purpose of this paper is to place these general issues in the context of a particular problem by describing in detail a simulation of elementary human symbolic learning processes. The information processing model of mental functions employed is realized by a computer program called Elementary Perceiver and Memorizer (EPAM). The EPAM program is the precise statement of an information processing theory of verbal learning that provides an alternative to other verbal learning theories which have been proposed. It is the result of an attempt to state quite precisely a parsimonious and plausible mechanism sufficient to account for the rote learning of nonsense syllables. The critical evaluation of EPAM must ultimately depend not upon the interest which it may have as a learning machine, but upon its ability to explain and predict the phenomena of verbal learning. I should like to preface my discussion of the simulation of verbal learning with some brief remarks about the class of information processing models of which EPAM is a member.
a. These are models of mental processes, not brain hardware. They are psychological models of mental function. No physiological or neurological assumptions are made, nor is any attempt made to explain information processes in terms of more elementary neural processes.
b. These models conceive of the brain as an information processor with sense organs as input channels, effector organs as output devices, and with internal programs for testing, comparing, analyzing, rearranging, and storing information.
c. The central processing mechanism is assumed to be serial; i.e., capable of doing only one (or a very few) things at a time.
d. These models use as a basic unit the information symbol; i.e., a pattern of bits which is assumed to be the brain's internal representation of environmental data.
e. These models are essentially deterministic, not probabilistic. Random variables play no fundamental role in them.
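The growing discrimination net at the heart of EPAM can be illustrated with a toy sketch. This is not Feigenbaum's program: it assumes equal-length three-letter syllables, and all names and structures are invented for the example. Learning adds just enough letter tests to tell a new item apart from items already stored.

```python
# Toy discrimination net in the spirit of EPAM (not the actual program).
# A leaf stores an item image; an internal node tests one letter position
# and branches on the letter found there. The net grows only on conflict.

class Node:
    def __init__(self, image=None):
        self.image = image   # stored item, if this node is a leaf
        self.pos = None      # letter position tested, if internal
        self.branches = {}   # letter -> child Node

def retrieve(net, item):
    """Sort `item` down the net; return the image at the leaf reached."""
    node = net
    while node.pos is not None:
        node = node.branches.get(item[node.pos])
        if node is None:
            return None
    return node.image

def learn(net, item):
    """Grow the net just enough to discriminate `item` from stored items."""
    node = net
    while node.pos is not None:
        node = node.branches.setdefault(item[node.pos], Node())
    if node.image is None or node.image == item:
        node.image = item
        return
    # Conflict at a leaf: add a test on the first position that differs.
    old = node.image
    pos = next(i for i in range(len(old)) if old[i] != item[i])
    node.pos, node.image = pos, None
    node.branches[old[pos]] = Node(old)
    node.branches[item[pos]] = Node(item)

net = Node()
for syllable in ["DAX", "DAK", "BOZ"]:
    learn(net, syllable)
print(retrieve(net, "DAK"))  # DAK
```

Note that a cue matching only the tested positions (for instance "BAK", which shares the tested final letter with "DAK") sorts to the same leaf in this sketch, loosely echoing EPAM's retrieval by cues.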
Article
This paper describes an attempt to make use of machine learning or self-organizing processes in the design of a pattern-recognition program. The program starts not only without any knowledge of specific patterns to be input, but also without any operators for processing inputs. Operators are generated and refined by the program itself as a function of the problem space and of its own successes and failures in dealing with the problem space. Not only does the program learn information about different patterns, it also learns or constructs, in part at least, a secondary code appropriate for the analysis of the particular set of patterns input to it.
Article
Evaluates a class of models of human information processing made popular by D. E. Broadbent (see 33:5). A brief tachistoscopic display of 1 or 2 single letters, 4-letter common words, or 4-letter nonwords was immediately followed by a masking field along with 2 single-letter response alternatives chosen so as to minimize informational differences among the tasks. Giving 9 Ss response alternatives before the stimulus display as well as after it caused an impairment of performance. Performance on single words was clearly better than performance on single letters. The data suggested that the 1st stages of information processing are done in parallel, but scanning of the resultant highly processed information is done serially. (17 ref.) (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Experiments were performed with 3 Ss that demonstrate some of the functional properties of short-term storage in the visual system, its decay, readout, and erasure. Results indicate that the visual process involves a buffer storage which includes an erasure mechanism that is local in character and tends to erase stored information when new information is put in. Storage time appears to be of the order of ¼ second; storage capacity is more difficult to assess. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Article
Visual detection was studied in relation to displays of discrete elements, randomly selected consonant letters, distributed in random subsets of cells of a matrix, the S being required on each trial to indicate only which member of a predesignated pair of critical elements was present in a given display. Experimental variables were number of elements per display and number of redundant critical elements per display. Estimates of the number of elements effectively processed by a S during a 50 ms. exposure increased with display size, but not in the manner that would be expected if the S sampled a fixed proportion of the elements present in a display of given area. Test-retest data indicated substantial correlations over long intervals of time in the particular elements sampled by a S from a particular display. Efficiencies of detection with redundant critical elements were very close to those expected on the hypothesis of constant sample size over trials for any given display size and were relatively invariant with respect to distance between critical elements.
Article
A feature extraction model for the recognition of tachistoscopically presented alphanumeric material is presented. This model is applied to data from whole report, partial report, detection, and backward masking experiments. On the whole, the results are encouraging. In a final section the model which is presented as an extension of Bower's multicomponent model for memory is shown to be derivable as a limiting case of LaBerge's neutral element stimulus sampling model.
Article
Many recent investigators have studied "response bias" theories of the perception of common vs. uncommon words. Four different classes of theory are distinguished, and it is demonstrated that three of them are inconsistent with previously published and with fresh data. The fourth sense of response bias, however, leads to the prediction that bias on correct responses may be greater than that on errors, and is very accurately consistent with the data. This is the sense of response bias as analogous to the bias of a criterion in a statistical decision. (26 ref.)
Article
Visual duration thresholds were obtained for words differing in frequency of occurrence. S was required to give a prerecognition response for every exposure and to release a key when he was ready to respond. Responses immediately preceding correct recognition of both frequent and infrequent words tended to be words of high frequency. Those preceding infrequent words were more similar to the stimuli than were comparable responses to frequent words. Latency was greater when the response was an infrequent word. From Psyc Abstracts 36:04:4BC23N.
Article
The results of two experiments using 75 words demonstrated that the visual duration threshold of a word, measured tachistoscopically by an ascending method of limits, was related linearly to the logarithm of the relative frequency with which that word appeared in the Thorndike-Lorge word counts. When certain physical characteristics and component letters of the words were corrected empirically, the product-moment correlations for the two variables ranged from -.76 to -.83. The significance of the data in terms of an experimental analysis of language behavior is discussed.
Article
The experimental situation of concern is one in which a visual display comprising a number of discrete elements, in the present study randomly selected consonants, is presented tachistoscopically for an interval short enough to permit only a single fixation. Questions of primary interest are: (i) How much information in the display is reflected in selective responses by the subject; and (ii) is the information associated with individual elements of the display processed simultaneously or serially? Evidence bearing on the first question was obtained, firstly, by the classical procedure of verbal report, and secondly, by a discrimination procedure in which the subject was required to indicate only which member of a predesignated pair of elements was present in each display. (H. A. Taylor collaborated in these experiments.) Results from the second method yield estimates of transmitted information significantly larger than those from the first and agree quantitatively with those reported earlier by G. Sperling for a sampling procedure. Relative to the second question, a simultaneous sampling model, derived from statistical learning theory, can be rejected. A model assuming serial sampling of the display elements with random stopping provides a relatively good account of the data. When the estimate of the number of elements processed is plotted versus number of elements in the display, the curve yielded by the report procedure levels off beyond 5 to 6 elements while the curve for the detection procedure continues to rise. This disparity poses special problems for theories of retention loss following stimulation and immediate response.
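The serial-sampling-with-random-stopping account lends itself to a short simulation. This is a sketch under assumed parameters: the stopping probability and display sizes below are invented for illustration, not estimated from the data.

```python
import random

# Sketch of serial sampling with random stopping: display elements are
# processed one at a time, and after each element the scan terminates
# with a fixed probability. Parameters are assumed, not fitted.

def elements_processed(display_size, stop_p, rng):
    """Scan elements serially; after each one, stop with probability stop_p.
    Returns how many elements were processed on this trial."""
    for n in range(1, display_size + 1):
        if rng.random() < stop_p:
            return n
    return display_size  # scanned the whole display without stopping

rng = random.Random(0)
for size in (4, 8, 12):
    trials = [elements_processed(size, 0.25, rng) for _ in range(10_000)]
    print(size, round(sum(trials) / len(trials), 2))
```

With a fixed stopping probability, the mean number of elements processed keeps rising with display size but approaches an asymptote, which is qualitatively the pattern the detection procedure produced (a curve that continues to rise, unlike the report procedure's early plateau).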
FEIGENBAUM, E. A. The simulation of verbal learning behavior. In E. A. Feigenbaum & J. Feldman (Eds.), Computers and thought. New York: McGraw-Hill, 1963.
GIBSON, E. J., SCHAPIRO, F., & YONAS, A. Confusion matrices for graphic patterns obtained with a latency measure. Unpublished manuscript, Cornell University.
HOWES, D. H., & SOLOMON, R. L. Visual duration threshold as a function of word probability. Journal of Experimental Psychology, 1951, 41, 401-410.
KINCAID, W. M. The combination of 2 X m contingency tables. Biometrics, 1962, 18, 224-228.
NEISSER, U. Cognitive psychology. New York: Appleton-Century-Crofts, 1967.
NEWBIGGING, P. L. The perceptual redintegration of frequent and infrequent words. Canadian Journal of Psychology, 1961, 15, 123-132.
SELFRIDGE, O. G. Pandemonium: A paradigm for learning. In L. Uhr (Ed.), Pattern recognition. New York: Wiley, 1966.
SIEGEL, S. Nonparametric statistics for the behavioral sciences. New York: McGraw-Hill, 1956.
STEWART, M. L., JAMES, C. T., & GOUGH, P. B. Word recognition latency as a function of word length. Paper presented at the Midwestern Psychological Association Convention, May 1969.
THORNDIKE, E. L., & LORGE, I. The teacher's word book of 30,000 words. New York: Teachers College Press, 1944.
UHR, L. Pattern recognition. In L. Uhr (Ed.), Pattern recognition. New York: Wiley, 1966.