Previous tests of toddlers' phonological knowledge of familiar words using word recognition tasks have examined syllable onsets but not word-final consonants (codas). However, there are good reasons to suppose that children's knowledge of coda consonants might be less complete than their knowledge of onset consonants. To test this hypothesis, the present study examined 14- to 21-month-old children's knowledge of the phonological forms of familiar words by measuring their comprehension of correctly pronounced and mispronounced instances of those words in a visual fixation task. Mispronunciations substituted onset or coda consonants. Adults were tested in the same task for comparison with children. Children and adults fixated named targets more upon hearing correct pronunciations than upon hearing mispronunciations, whether those mispronunciations involved the word's initial or final consonant. In addition, detailed analysis of the timing of adults' and children's eye movements provided clear evidence for incremental interpretation of the speech signal. Children's responses were slower and less accurate overall, but children and adults showed nearly identical temporal effects of the placement of phonological substitutions. The results demonstrate accurate encoding of consonants even in words children cannot yet say.
We used an off-line story continuation task and an online ERP reading task to investigate coreferential processing following sentences that portrayed transfer-of-possession events as either ongoing or completed, using imperfective and perfective verb aspect (e.g., Amanda was shifting/shifted some poker chips to Scott). The story continuation task demonstrated that people were more likely to begin continuations with references to the Goal than to the Source, but that perfective aspect strengthened this bias. In the ERP task we probed expectations for Source and Goal referents by employing pronouns that matched one of the referents in gender. The ERP results were consistent with the biases revealed in the story continuation task and demonstrate that the difference in Goal bias for the two forms of aspect was manifested differently in the brain. These results provide novel behavioral and neurocognitive evidence that verb aspect influences the construction of situation models during language comprehension.
Two eye movement experiments examined effects on syntactic reanalysis when the correct analysis was briefly entertained at an earlier point in the sentence. In Experiment 1, participants read sentences containing a noun phrase coordination/clausal coordination ambiguity, while in Experiment 2 they read sentences containing a subordinate clause object/main clause subject ambiguity. The critical conditions were designed to induce readers to construct the ultimately correct analysis just prior to being garden-pathed by the incorrect analysis. In both experiments, the earliest measures of the garden path effect were not modulated by this manipulation. However, there was significantly less regressive re-reading of the sentence in those conditions in which the correct analysis was likely to have been constructed, then abandoned, at an earlier point. These results suggest that a syntactic analysis that is abandoned in the course of processing a sentence is not lost altogether, and can be re-activated or retrieved from memory. Implications for models of initial syntactic analysis and reanalysis are discussed.
In two event-related potential (ERP) experiments, we determined to what extent Grice's maxim of informativeness and individual pragmatic ability contribute to the incremental build-up of sentence meaning, by examining the impact of underinformative versus informative scalar statements (e.g. "Some people have lungs/pets, and…") on the N400, an electrophysiological index of semantic processing. In Experiment 1, only pragmatically skilled participants (as indexed by the Autism Quotient Communication subscale) showed a larger N400 to underinformative statements. In Experiment 2, this effect disappeared when the critical words were unfocused, so that the local underinformativeness went unnoticed (e.g., "Some people have lungs that…"). Our results suggest that, while pragmatic scalar meaning can incrementally contribute to sentence comprehension, this contribution depends on contextual factors, whether these derive from individual pragmatic abilities or from the overall experimental context.
According to Bloch's law, as applied to visual word recognition research, both the exposure duration of a prime and its luminance determine the prime's overall energy, and consequently the size of the priming effect. Nevertheless, experimenters using fast-priming paradigms have traditionally focused only on the SOA between prime and target as a reflection of the absolute speed of the cognitive processes under investigation. Some of the discrepancies in results regarding the time course of orthographic and phonological activation in word recognition research may be due to this factor. We examined this hypothesis by parametrically manipulating the luminance of the prime and its exposure duration, measuring their joint impact on masked repetition priming. The results show that small and non-significant priming effects can be more than tripled simply by increasing luminance while SOA is kept constant. Moreover, increased luminance can compensate for briefer exposure duration, and vice versa.
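The trade-off at issue reduces to a one-line relation. The sketch below is only an illustration of Bloch's law with made-up luminance and duration values, not parameters or units from the study:

```python
def prime_energy(luminance, duration_ms):
    # Bloch's law: within the critical duration, effective stimulus
    # energy is (approximately) the product of intensity and time.
    return luminance * duration_ms

# Hypothetical values: a dimmer 60 ms prime and a brighter 30 ms prime
# deliver the same total energy, so they should prime comparably,
# even though their durations (and hence SOAs) differ.
e_dim_long = prime_energy(30.0, 60)
e_bright_short = prime_energy(60.0, 30)
```

On this view, reporting only the prime-target SOA leaves the prime's effective energy, and thus the expected priming effect, underspecified.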
Three experiments tested theories of syntactic representation by assessing stem exchange errors ("hates the record" -> "records the hate"). Previous research has shown that in stem exchanges, speakers pronounce intended nouns ("REcord") as verbs ("reCORD"), yielding syntactically well-formed utterances. By lexically based theories, the resulting utterances are well-formed because speakers originally selected verbal forms ("reCORD"). By frame-based theories, the resulting utterances are well-formed because independent syntactic frames compel conversion of intended nouns into verbs. Lexically based theories predict that stem exchanges should occur independently of syntactic context. Experiment 1 showed that speakers pronounced nouns as verbs only in utterances that required verbs; when utterances allowed nouns or verbs ("record and hate"), speakers pronounced nouns as nouns. Experiment 2 showed that this was not an artifact of requiring specific utterance types. Experiment 3 ruled out a phonological influence on syntactic production. Consistent with frame-based theories, this evidence suggests that syntactic frames are abstract and independent.
The effects of pitch accenting on memory were investigated in three experiments. Participants listened to short recorded discourses that contained contrast sets with two items (e.g. British scientists and French scientists); a continuation specified one item from the set. Pitch accenting on the critical word in the continuation was manipulated between non-contrastive (H* in the ToBI system) and contrastive (L+H*). On subsequent recognition memory tests, the L+H* accent increased hits to correct statements and correct rejections of the contrast item (Experiments 1-3), but did not impair memory for other parts of the discourse (Experiment 2). L+H* also did not facilitate correct rejections of lures not in the contrast set (Experiment 3), indicating that contrastive accents do not simply strengthen the representation of the target item. These results suggest comprehenders use pitch accenting to encode and update information about multiple elements in a contrast set.
The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain's electrical response to a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited by objects with low-frequency names started to diverge from those with high-frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later and elicited a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object did the brain rapidly engage in lexical access. The data offer evidence that the top-down intention to speak proactively facilitates the activation of words related to perceived objects.
Lexical access in language production, and particularly pathologies of lexical access, are often investigated by examining errors in picture naming and word repetition. In this article, we test a computational approach to lexical access, the two-step interactive model, by examining whether the model can quantitatively predict the repetition-error patterns of 65 aphasic subjects from their naming errors. The model's characterizations of the subjects' naming errors were taken from the companion paper to this one (Schwartz, Dell, N. Martin, Gahl, & Sobel, 2006), and their repetition was predicted from the model on the assumption that naming involves two error-prone steps, word retrieval and phonological retrieval, whereas repetition generates errors only in the second of these steps. A version of the model in which lexical-semantic and lexical-phonological connections could be independently lesioned was generally successful in predicting repetition for the aphasic subjects. An analysis of the few cases in which the model's predictions were inaccurate revealed the role of input phonology in the repetition task.
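The model's core asymmetry between naming and repetition can be caricatured in a few lines. This is a toy reduction with hypothetical step accuracies, not the interactive two-step network itself, which involves lesionable semantic and phonological connection weights:

```python
def naming_correct(p_word, p_phon):
    # Naming must survive both error-prone steps:
    # word retrieval, then phonological retrieval.
    return p_word * p_phon

def repetition_correct(p_word, p_phon):
    # Repetition is assumed to bypass word retrieval,
    # so only phonological retrieval can fail.
    return p_phon

# Hypothetical step accuracies for one speaker:
p_naming = naming_correct(0.9, 0.8)
p_repetition = repetition_correct(0.9, 0.8)
```

Under this assumption, predicted repetition accuracy can never fall below naming accuracy for the same lexical items, which is the kind of quantitative constraint the 65 case fits put to the test.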
Participants took part in two speech tests. In both tests, a model speaker produced vowel-consonant-vowels (VCVs) in which the initial vowel varied unpredictably in duration. In the simple response task, participants shadowed the initial vowel; when the model shifted to production of any of three CVs (/pa/, /ta/ or /ka/), participants produced a CV that they were assigned to say (one of /pa/, /ta/ or /ka/). In the choice task, participants shadowed the initial vowel; when the model shifted to a CV, participants shadowed that too. We found that, measured from the model's onset of closure for the consonant to the participant's closure onset, response times in the choice task exceeded those in the simple task by just 26 ms. This is much shorter than the canonical difference between simple and choice latencies [100-150 ms according to Luce (1986)] and is near the fastest simple times that Luce reports. The findings imply rapid access to articulatory speech information in the choice task. A second experiment found much longer choice times when the perception-production link for speech could not be exploited. A third experiment and an acoustic analysis verified that our measurement from closure in Experiment 1 provided a valid marker of speakers' onsets of consonant production. A final experiment showed that shadowing responses are imitations of the model's speech. We interpret the findings as evidence that listeners rapidly extract information about speakers' articulatory gestures.
Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends - simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals' ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch.
The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments in which sources presented information that was expected for one source and somewhat unexpected for the other. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating the source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing.
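The probability-matching account amounts to a simple decision rule for guessing. The sketch below is a schematic paraphrase with invented parameter names and values, not the multinomial measurement model used in this literature:

```python
def p_guess_source_a(perceived_contingency_a, schema_prior_a,
                     has_contingency_representation):
    """Guessing probability for source A when the true source is forgotten.

    If a representation of the source-item contingency was formed,
    guesses match that perceived contingency; otherwise they fall
    back on prior schematic knowledge.
    """
    if has_contingency_representation:
        return perceived_contingency_a
    return schema_prior_a

# Divided attention (as in Experiment 1): no contingency representation,
# so guessing is schema-based (hypothetical prior of .8).
p_divided = p_guess_source_a(0.5, 0.8, has_contingency_representation=False)

# A perceived schema-inconsistent contingency (as in Experiment 2)
# supersedes the schema-based prior.
p_contingent = p_guess_source_a(0.2, 0.8, has_contingency_representation=True)
```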
Performance in the lexical decision task is highly dependent on decision criteria. These criteria can be influenced by speed versus accuracy instructions and by word/nonword proportions. Experiment 1 showed that error responses speed up relative to correct responses under instructions to respond quickly. Experiment 2 showed that responses to less probable stimuli are slower and less accurate than responses to more probable stimuli. The data from both experiments support the diffusion model for lexical decision (Ratcliff, Gomez, & McKoon, 2004). At the same time, the data provide evidence against the popular deadline model for lexical decision. The deadline model assumes that "nonword" responses are given only after the "word" response has timed out; consequently, the deadline model cannot account for data from experimental conditions in which "nonword" responses are systematically faster than "word" responses.
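The diffusion model's key claim is that both response types arise from a single noisy evidence-accumulation process racing toward two boundaries, rather than from a timed-out deadline. A seeded random-walk simulation illustrates the architecture; all parameter values are arbitrary illustrations, not the fits from Ratcliff, Gomez, and McKoon (2004):

```python
import random

def ddm_trial(drift, boundary=0.1, noise=0.1, dt=0.001, ndt=0.3, rng=None):
    """One trial of a simple two-boundary diffusion process.

    Evidence accumulates from 0 toward +boundary (correct response)
    or -boundary (error); ndt is nondecision time in seconds.
    Parameter values are illustrative only.
    """
    rng = rng or random
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return x >= boundary, t + ndt  # (correct?, RT in seconds)

rng = random.Random(42)
high_drift = [ddm_trial(0.20, rng=rng) for _ in range(400)]
low_drift = [ddm_trial(0.05, rng=rng) for _ in range(400)]
acc_high = sum(c for c, _ in high_drift) / len(high_drift)
acc_low = sum(c for c, _ in low_drift) / len(low_drift)
```

Because either boundary can be reached first on any trial, conditions in which "nonword" responses are systematically faster than "word" responses simply correspond to different drift and criterion settings, posing no architectural problem of the kind the deadline model faces.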
Manipulating either list length (e.g., few vs. many study items) or encoding strength (e.g., one presentation vs. multiple presentations of each study item) produces a recognition mirror effect. A formal dual-process theory of recognition memory that accounts for the word-frequency mirror effect is extended to account for the list-length and strength-based mirror effects. According to this theory, the hit portions of these mirror effects result from differential ease of recollection-based recognition, and the false alarm portions result from differential reliance on familiarity-based recognition. This account yields predictions for participants' Remember and Know responses as a function of list length and encoding strength. Empirical data and model fits from four experiments support these predictions. The data also demonstrate a reliable list-length effect when several potential confounding factors are controlled, contributing to the debate regarding the effect of list length on recognition.
This paper presents the cue-based retrieval theory of parsing and reanalysis and illustrates how this account can accommodate a number of key results about parsing and reanalysis, including effects due to structure, distance, and type of structural change. Three offline experiments and one online experiment make it possible to establish whether these effects arise from properties of the initial parsing processes or from the repair mechanism. Specifically, the data reported here suggest that a structural factor specific to the operation of the parser, retrieval interference, affects attachment uniformly across ambiguous and unambiguous sentences and serves to create a limit on successful repair. In addition, these experiments suggest that the distance of the head of an ambiguous phrase from its disambiguator affects repair processes, not attachment processes, independently of the interference effect. These results are interpreted with respect to alternative models of reanalysis, which are contrasted with the cue-based retrieval account; the latter requires no distinct repair mechanism to account for the current results. A further contribution of this article is a statistical correction for individual variance in reading rates. Statistical analyses of individual subject data confirmed previous speculations that reading rates increase as subjects move through a sentence. While this individual variation limits fair comparisons of reading times in sentence regions that appear in non-identical serial positions, we demonstrate that such comparisons become meaningful when the appropriate regression analyses have been performed.
Optimizing learning over multiple retrieval opportunities requires a joint consideration of both the probability and the mnemonic value of a successful retrieval. Previous research has addressed this trade-off by manipulating the schedule of practice trials, suggesting that a pattern of increasingly long lags ("expanding retrieval practice") may keep retrievals successful while gradually increasing their mnemonic value (Landauer & Bjork, 1978). Here we explore the trade-off issue further using an analogous manipulation of cue informativeness. After being given an initial presentation of English-Iñupiaq word pairs, participants received practice trials across which letters of the target word were either accumulated (AC), diminished (DC), or always fully present. Diminishing cues yielded the highest performance on a final test of cued recall. Additional analyses suggest that AC practice promotes potent (effortful) retrieval at the cost of success, and DC practice promotes successful retrieval at the cost of potency. Experiment 2 revealed that the negative effects of AC practice can be partly ameliorated by providing feedback after each practice trial.
There is mounting evidence that prosody facilitates grouping the speech stream into syntactically-relevant units (e.g., Hawthorne & Gerken, 2014; Soderstrom, Kemler Nelson, & Jusczyk, 2005). We ask whether prosody's role in syntax acquisition relates to its general acoustic salience or to the learner's acquired knowledge of correlations between prosody and syntax in her native language. English- and Japanese-acquiring 19-month-olds listened to sentences from an artificial grammar with non-native prosody (Japanese or English, respectively), then were tested on their ability to recognize prosodically-marked constituents when the constituents had moved to a new position in the sentence. Both groups were able to use non-native prosody to parse speech into cohesive, reorderable, syntactic constituent-like units. Comparison with Hawthorne & Gerken (2014), in which English-acquiring infants were tested on sentences with English prosody, suggests that 19-month-olds are equally adept at using native and non-native prosody for at least some types of learning tasks and, therefore, that prosody is useful in early syntactic segmentation because of its acoustic salience.
Recent research has demonstrated that knowledge of real-world events plays an important role in guiding online language comprehension. The present study addresses the scope of event knowledge activation during the course of comprehension, specifically investigating whether activation is limited to those knowledge elements that align with the local linguistic context. We address this issue by analyzing event-related brain potentials (ERPs) recorded as participants read brief scenarios describing typical real-world events. Experiment 1 demonstrates that a contextually anomalous word elicits a reduced N400 if it is generally related to the described event, even when controlling for the degree of association of this word with individual words in the preceding context and with the expected continuation. Experiment 2 shows that this effect disappears when the discourse context is removed. These findings demonstrate that during the course of incremental comprehension, comprehenders activate general knowledge about the described event, even at points at which this knowledge would constitute an anomalous continuation of the linguistic stream. Generalized event knowledge activation contributes to mental representations of described events, is immediately available to influence language processing, and likely drives linguistic expectancy generation.
When an elided constituent and its antecedent do not match syntactically, the presence of a word implying the non-actuality of the state of affairs described in the antecedent seems to improve the example (This information should be released but Gorbachev didn't. vs. This information was released but Gorbachev didn't.). We model this effect in terms of Non-Actuality Implicatures (NAIs) conveyed by non-epistemic modals like should and by other expressions, such as want to and be eager to, that imply non-actuality. We report three studies. A rating and interpretation study showed that such implicatures are drawn and that they improve the acceptability of mismatch ellipsis examples. An interpretation study showed that adding a NAI trigger to ambiguous examples increases the likelihood of choosing an antecedent from the NAI clause. An eye movement study showed that a NAI trigger also speeds online reading of the ellipsis clause. By introducing alternatives (the desired state of affairs vs. the actual state of affairs), the NAI trigger introduces a potential Question Under Discussion (QUD). Processing an ellipsis clause is easier, and the processor is more confident of its analysis, when the ellipsis clause comments on the QUD.
Two story-telling experiments examine the process of choosing between pronouns and proper names in speaking. Such choices are traditionally attributed to speakers striving to make referring expressions maximally interpretable to addressees. The experiments revealed a novel effect: even when a pronoun would not be ambiguous, the presence of another character in the discourse decreased pronoun use and increased latencies to refer to the most prominent character in the discourse. In other words, speakers were more likely to call Minnie Minnie than she when Donald was also present. Even when the referent character appeared alone in the stimulus picture, the presence of another character in the preceding discourse reduced pronouns. Furthermore, pronoun use varied with features associated with the speaker's degree of focus on the preceding discourse (e.g., narrative style and disfluency). We attribute this effect to competition for attentional resources in the speaker's representation of the discourse.
Phonology provides a system by which a limited number of types of phonetic variation can signal communicative intentions at multiple levels of linguistic analysis. Because phonologies vary from language to language, acquiring the phonology of a language demands learning to attribute phonetic variation appropriately. Here, we studied the case of pitch-contour variation. In English, pitch contour does not differentiate words, but serves other functions, like marking yes/no questions and conveying emotions. We show that, in accordance with their phonology, English-speaking adults and two-year-olds do not interpret salient pitch contours as inherent to novel words. We taught participants a new word with consistent segmental and pitch characteristics, and then tested word recognition for trained and deviant pronunciations using an eyegaze-based procedure. Vowel-quality mispronunciations impaired recognition, but large changes in pitch contour did not. By age two, children already apply their knowledge of English phonology to interpret phonetic consistencies in their experience with words.
Spoken word recognition shows gradient sensitivity to within-category voice onset time (VOT), as predicted by several current models of spoken word recognition, including TRACE (McClelland & Elman, Cognitive Psychology, 1986). It remains unclear, however, whether this sensitivity is short-lived or whether it persists over multiple syllables. VOT continua were synthesized for pairs of words like barricade and parakeet, which differ in the voicing of their initial phoneme, but otherwise overlap for at least four phonemes, creating an opportunity for "lexical garden-paths" when listeners encounter the phonemic information consistent with only one member of the pair. Simulations established that phoneme-level inhibition in TRACE eliminates sensitivity to VOT too rapidly to influence recovery. However, in two Visual World experiments, look-contingent and response-contingent analyses demonstrated effects of word initial VOT on lexical garden-path recovery. These results are inconsistent with inhibition at the phoneme level and support models of spoken word recognition in which sub-phonetic detail is preserved throughout the processing system.
The effects of aging on response time were examined in a recognition memory experiment with young, college-age subjects and older, 60- to 75-year-old subjects. The older subjects were slower than the young subjects but almost as accurate. Ratcliff's (1978) diffusion model was fit to the data, and it provided a good account of response times, their distributions, and accuracy values. The fits showed a 100 ms slowing of the nondecision components of response time for older subjects relative to young subjects, and roughly equal response criterion settings under accuracy instructions but more conservative settings for the older subjects under speed instructions. In the diffusion model, the decision process is driven by the rate of accumulation of evidence from the stimulus. We found that the rate of accumulation for older subjects was a non-significant 7% lower than the rate for young subjects, indicating that the output from recognition memory entering the decision process was not significantly worse for the older subjects. The results are compared to those obtained from letter discrimination, brightness discrimination, and signal detection-like tasks.
The "weaker links" hypothesis proposes that bilinguals are disadvantaged relative to monolinguals on speaking tasks because they divide frequency-of-use between two languages. To test this proposal, we contrasted the effects of increased word use associated with monolingualism, language dominance, and increased age on picture naming times. In two experiments, younger and older bilinguals and monolinguals named pictures with high- or low-frequency names in English and (if bilingual) also in Spanish. In Experiment 1, slowing related to bilingualism and language dominance was greater for producing low- than high-frequency names. In Experiment 2, slowing related to aging was greater for producing low-frequency names in the dominant language, but when speaking the nondominant language, increased age attenuated frequency effects and age-related slowing was limited exclusively to high-frequency names. These results challenge competition-based accounts of bilingual disadvantages in language production, and illustrate how between-group processing differences may emerge from cognitive mechanisms general to all speakers.
Across most languages, verbs produced by agrammatic aphasic individuals are frequently marked by syntactically and semantically inappropriate inflectional affixes, as in Last night, I walking home. According to language production models, verb inflection errors in English agrammatism could arise from three potential sources: encoding the verb's morphology based on temporal information at the conceptual level, accessing syntactic well-formedness constraints on verbal morphology, and encoding morphophonological form. We investigated these aspects of encoding verb inflections in agrammatic aphasia. Three sentence completion experiments demonstrated that production of verb inflections was impaired whenever temporal reference was involved, whereas morphological complexity and syntactic constraints were less likely to be sources of verb inflection errors in agrammatism. These findings are discussed in relation to current language production models.
Three experiments investigated how font emphasis influences reading and remembering discourse. Although past work suggests that contrastive pitch contours benefit memory by promoting encoding of salient alternatives, it is unclear both whether this effect generalizes to other forms of linguistic prominence and how the set of alternatives is constrained. Participants read discourses in which some true propositions had salient alternatives (e.g., British scientists found the endangered monkey when the discourse also mentioned French scientists) and completed a recognition memory test. In Experiments 1 and 2, font emphasis in the initial presentation increased participants' ability to later reject false statements about salient alternatives but not about unmentioned items (e.g., Portuguese scientists). In Experiment 3, font emphasis helped reject false statements about plausible alternatives, but not about less plausible alternatives that were nevertheless established in the discourse. These results suggest readers encode a narrow set of only those alternatives plausible in the particular discourse. They also indicate that multiple manipulations of linguistic prominence, not just prosody, can lead to consideration of alternatives.
Two experiments examined the dynamics of lexical activation in spoken-word recognition. In both, the key materials were pairs of onset-matched picturable nouns varying in frequency. Pictures associated with these words, plus two distractor pictures, were displayed. A gating task, in which participants identified the picture associated with gradually lengthening fragments of spoken words, examined the availability of discriminating cues in the speech waveforms for these pairs. There was a clear frequency bias in participants' responses to short, ambiguous fragments, followed by a temporal window in which discriminating information gradually became available. A visual-world experiment examined speech-contingent eye movements. Fixation analyses suggested that frequency influences lexical competition well beyond the point in the speech signal at which the spoken word has been fully discriminated from its competitor (as identified using gating). Taken together, these data support models in which the processing dynamics of lexical activation are a limiting factor on recognition speed, over and above the temporal unfolding of the speech signal.
Two event-related potential experiments investigated the effects of syntactic and semantic context information on the processing of noun/verb (NV) homographs (e.g., park). Experiment 1 embedded NV-homographs and matched unambiguous words in contexts that provided only syntactic cues or both syntactic and semantic constraints. Replicating prior work, when only syntactic information was available NV-homographs elicited sustained frontal negativity relative to unambiguous words. Semantic constraints eliminated this frontal ambiguity effect. Semantic constraints also reduced N400 amplitudes, but less so for homographs than unambiguous words. Experiment 2 showed that this reduced N400 facilitation was limited to cases in which the semantic context picks out a nondominant meaning, likely reflecting the semantic mismatch between the context and residual, automatic activation of the contextually-inappropriate dominant sense. Overall, the findings suggest that ambiguity resolution in context involves the interplay between multiple neural networks, some involving more automatic semantic processing mechanisms and others involving top-down control mechanisms.
In studies of anaphor comprehension, the capacity for recognizing a noun in a sentence decreases following the resolution of a repeated-noun anaphor (Gernsbacher, 1989). In studies of recognition memory, the capacity for recognizing a noun in a scrambled sentence decreases following the recognition that another noun has occurred before in the scrambled sentence (Dopkins & Ngo, 2002). The results of the present study suggest that these two phenomena reflect the same recognition memory process. The results suggest further that this is not because participants in studies of anaphor comprehension ignore the discourse properties of the stimulus materials and treat them as lists of words upon which memory tests are to be given. These results suggest that recognition processes play a role in anaphor comprehension and that such processes are in part the means by which repeated-noun anaphors are identified as such.
We aimed to determine whether semantic relatedness between an incoming word and its preceding context can override expectations based on two types of stored knowledge: real-world knowledge about the specific events and states conveyed by a verb, and the verb's broader selection restrictions on the animacy of its argument. We recorded event-related potentials on post-verbal Agent arguments as participants read and made plausibility judgments about passive English sentences. The N400 evoked by incoming animate Agent arguments that violated expectations based on real-world event/state knowledge was strongly attenuated when they were semantically related to the context. In contrast, semantic relatedness did not modulate the N400 evoked by inanimate Agent arguments that violated the preceding verb's animacy selection restrictions. These findings suggest that, under these task and experimental conditions, semantic relatedness can facilitate processing of post-verbal animate arguments that violate specific expectations based on real-world event/state knowledge, but only when the semantic features of these arguments match the coarser-grained animacy restrictions of the verb. Animacy selection restriction violations also evoked a P600 effect, which was not modulated by semantic relatedness, suggesting that it was triggered by propositional impossibility. Together, these data indicate that the brain distinguishes between real-world event/state knowledge and animacy-based selection restrictions during online processing.
Traditional syntactic accounts of verb phrase ellipsis (e.g. "Jason laughed. Sam did [ ] too.") categorize as ungrammatical many sentences that language users find acceptable (they "undergenerate"); semantic accounts overgenerate. We propose that a processing theory, together with a syntactic account, does a better job of describing and explaining the data on verb phrase ellipsis. Five acceptability judgment experiments supported a "VP recycling hypothesis," which claims that when a syntactically-matching antecedent is not available, the listener/reader creates one using the materials at hand. Experiments 1 and 2 used verb phrase ellipsis sentences with antecedents ranging from perfect (a verb phrase in matrix verb phrase position) to impossible (a verb phrase containing only a deverbal word). Experiments 3 and 4 contrasted antecedents in verbal versus nominal gerund subjects. Experiment 5 explored the possibility that speakers are particularly likely to go beyond the grammar and produce elided constituents without perfect matching antecedents when the antecedent needed is less marked than the antecedent actually produced. This experiment contrasted active (unmarked) and passive antecedents to show that readers seem to honor such a tendency.
This paper presents findings from two eye-tracking studies designed to investigate the role of metrical prosody in silent reading. In Experiment 1, participants read stress-alternating noun-verb or noun-adjective homographs (e.g. PREsent, preSENT) embedded in limericks, such that the lexical stress of the homograph, as determined by context, either matched or mismatched the metrical pattern of the limerick. The results demonstrated a reading cost when readers encountered a mismatch between the predicted and actual stress pattern of the word. Experiment 2 demonstrated a similar cost of a mismatch in stress patterns in a context where the metrical constraint was mediated by lexical category rather than by explicit meter. Both experiments demonstrated that readers are slower to read words when their stress pattern does not conform to expectations. The data from these two eye-tracking experiments provide some of the first on-line evidence that metrical information is part of the default representation of a word during silent reading.
Three eye-tracking experiments investigated the role of pitch accents during online discourse comprehension. Participants faced a grid with ornaments, and followed pre-recorded instructions such as "Next, hang the blue ball" to decorate holiday trees. Experiment 1 demonstrated a processing advantage for felicitous as compared to infelicitous uses of L+H* on the adjective-noun pair (e.g. blue ball followed by GREEN ball vs. green BALL). Experiment 2 confirmed that L+H* on a contrastive adjective led to 'anticipatory' fixations, and demonstrated a "garden path" effect for infelicitous L+H* in sequences with no discourse contrast (e.g. blue angel followed by GREEN ball resulted in erroneous fixations to the cell of angels). Experiment 3 examined listeners' sensitivity to coherence between pitch accents assigned to discourse markers such as 'And then,' and those assigned to the target object noun phrase.
Four experiments examined listeners' segmentation of ambiguous schwa-initial sequences (e.g., a long vs. along) in casual speech, where acoustic cues can be unclear, possibly increasing reliance on contextual information to resolve the ambiguity. In Experiment 1, acoustic analyses of talkers' productions showed that the one-word and two-word versions were produced almost identically, regardless of the preceding sentential context (biased or neutral). These tokens were then used in three listening experiments, whose results confirmed the lack of local acoustic cues for disambiguating the interpretation, and the dominance of sentential context in parsing. Findings speak to the H&H theory of speech production (Lindblom, 1990), demonstrate that context alone guides parsing when acoustic cues to word boundaries are absent, and demonstrate how knowledge of how talkers speak can contribute to an understanding of how words are segmented.
This study presents the first direct comparison of immediate serial recall in semantic dementia (SD) and transcortical sensory aphasia (TSA). Previous studies of the effect of semantic impairment on verbal short-term memory (STM) have led to important theoretical advances. However, different conclusions have been drawn from these two groups. This research aimed to explain these inconsistencies. We observed (a) qualitative differences between SD and TSA in the nature of the verbal STM impairment and (b) considerable variation within the TSA group. The SD and TSA patients all had poor semantic processing and good phonology. Reflecting this, both groups remained sensitive to phonological similarity and showed a reduced effect of lexicality in immediate serial recall. The SD patients showed normal serial position effects; in contrast, the TSA patients had poor recall of the initial list items and exhibited large recency effects on longer lists. The error patterns of the two groups differed: the SD patients made numerous phoneme migration errors whereas the TSA group were more likely to produce entire words in the wrong order, often initiating recall with terminal list items. The SD cases also showed somewhat larger effects of word frequency and imageability. We propose that these contrasting performance patterns are explicable in terms of the nature of the underlying semantic impairment. SD is associated with anterior temporal lobe atrophy and produces degradation of semantic knowledge; this is more striking for less frequent/imageable items, accentuating the effects of these lexical/semantic variables in STM. SD patients frequently recombine the phonemes of different list items due to the reduced semantic constraint upon phonology (semantic binding: Patterson et al., 1994). In contrast, the semantic impairment in TSA follows frontal or temporoparietal lesions and is associated with poor executive control of semantic processing (deregulated semantic cognition: Jefferies and Lambon Ralph, 2006), explaining why these patients are liable to recall entire words out of serial order.
This paper investigates the cognitive processes underlying picture naming and auditory word repetition. In the 2-step model of lexical access, both the semantic and phonological steps are involved in naming, but the former has no role in repetition. Assuming recognition of the to-be-repeated word, repetition could consist of retrieving the word's output phonemes from the lexicon (the lexical-route model), retrieving the output phonology directly from input phonology (the nonlexical-route model) or employing both routes together (the summation dual-route model). We tested these accounts by comparing the size of the word frequency effect (an index of lexical retrieval) in naming and repetition data from 59 aphasic patients with simulations of naming and repetition models. The magnitude of the frequency effect (and the influence of other lexical variables) was found to be comparable in naming and repetition, and equally large for both the lexical and summation dual-route models. However, only the dual-route model was fully consistent with data from patients, suggesting that nonlexical input is added on top of a fully-utilized lexical route.
Two experiments are reported which examine how manipulations of visual attention affect speakers' linguistic choices regarding word order, verb use and syntactic structure when describing simple pictured scenes. Experiment 1 presented participants with scenes designed to elicit the use of a perspective predicate (The man chases the dog/The dog flees from the man) or a conjoined noun phrase sentential Subject (A cat and a dog/A dog and a cat). Gaze was directed to a particular scene character by way of an attention-capture manipulation. Attention capture increased the likelihood that this character would be the sentential Subject and altered the choice of perspective verb or word order within conjoined NP Subjects accordingly. Experiment 2 extended these results to word order choice within Active versus Passive structures (The girl is kicking the boy/The boy is being kicked by the girl) and symmetrical predicates (The girl is meeting the boy/The boy is meeting the girl). Experiment 2 also found that early endogenous shifts in attention influence word order choices. These findings indicate a reliable relationship between initial looking patterns and speaking patterns, reflecting considerable parallelism between the on-line apprehension of events and the on-line construction of descriptive utterances.
This research tests whether comprehenders use their knowledge of typical events in real time to process verbal arguments. In self-paced reading and event-related brain potential (ERP) experiments, we used materials in which the likelihood of a specific patient noun (brakes or spelling) depended on the combination of an agent and verb (mechanic checked vs. journalist checked). Reading times were shorter at the word directly following the patient for the congruent than the incongruent items. Differential N400s were found earlier, immediately at the patient. Norming studies ruled out any account of these results based on direct relations between the agent and patient. Thus, comprehenders dynamically combine information about real-world events based on intrasentential agents and verbs, and this combination then rapidly influences online sentence interpretation.
Four experiments revealed arousal-enhanced location memory for pictures. After an incidental encoding task, participants were more likely to remember the locations of positive and negative arousing pictures than the locations of non-arousing pictures, indicating better binding of location to picture. This arousal-enhanced binding effect did not have a cost for the binding of nearby pictures to their locations. Thus, arousal can enhance binding of an arousing picture's content to its location without interfering with picture-location binding for nearby pictures. In addition, arousal-enhanced picture-location memory binding is not just a side effect of enhanced memory for the picture itself, as it occurs both when recognition memory is good and when it is poor.
Three experiments using online processing measures explored whether native and non-native Spanish-speaking adults use gender-marked articles to identify referents of target nouns more rapidly, as shown previously with 3-year-old children learning Spanish as L1 (Lew-Williams & Fernald, 2007). In Experiment 1, participants viewed familiar objects with names of either the same or different grammatical gender while listening to Spanish sentences referring to one object. L1 adults, like L1 children, oriented to the target more rapidly on different-gender trials, when the article was informative about noun identity; however, L2 adults did not. Experiments 2 and 3 controlled for frequency of exposure to article-noun pairs by using novel nouns. L2 adults could not exploit gender information when different article-noun pairs were used in teaching and testing. Experience-related factors may influence how L1 adults and children and L2 adults (who learned Spanish at different ages and in different settings) use grammatical gender in real-time processing.
Four experiments investigated the novel issue of learning to accommodate the coarticulated nature of speech. Experiment 1 established a coarticulatory mismatch effect for a set of vowel-consonant (VC) syllables (reaction times were faster for coarticulation-matching than for mismatching stimuli). A rhyme judgment training task on words (Experiment 2) or VC stimuli (Experiment 3) with mismatching information was followed by a phoneme monitoring task on a set of VC stimuli; training and test stimuli contained physically identical (same condition) or new (different condition) mismatching coarticulatory information (along with a set containing matching coarticulatory information). A third group received no training. A coarticulatory mismatch effect was found without training but not when the same mismatching tokens were used at training and test. Both word (Experiment 2) and syllable (Experiment 3) training stimuli eliminated the mismatch effect; overall reaction times were somewhat slower when the training stimuli were words. Perceptual learning generalized to new tokens only when the acoustic manifestation of the critical coarticulatory information in the training stimuli was sufficiently large (Experiments 3 and 4). The results are discussed in terms of speech processing and perceptual learning in speech perception.
In two experiments, participants studied word pairs and later discriminated old (intact) word pairs from foils, including recombined word pairs and pairs including one or two previously unstudied words. Rather than making old/new memory judgments, they chose one of five responses: (1) Old-Old (original), (2) Old-Old (rearranged), (3) Old-New, (4) New-Old, (5) New-New. To tease apart the effects of item familiarity from those of associative strength, we varied both how many times a specific word-pair was repeated (1 or 5) and how many different word pairs were associated with a given word (1 or 5). Participants could discriminate associative information from item information such that they recognized which word of a foil was new, or whether both were new, as well as discriminating recombined studied words from original pairings. The error and latency data support the view that item and associative information are stored as distinct memory representations and make separate contributions at retrieval.
Verbal-to-spatial associations in working memory may index a core capacity for abstract information limited in the amount concurrently retained. However, what look like associative, abstract representations could instead reflect verbal and spatial codes held separately and then used in parallel. We investigated this issue in two experiments on memory for associations between names and spatial locations, with or without a 1-to-1 correspondence between the two. Participants (children 9-10 and 12-13 years old and college students) saw series of names presented at spatial locations occupied by house icons and indicated the location at which a probe name had appeared. Only adults benefited from 1-to-1 correspondence between names and locations, and this benefit was eliminated by articulatory suppression. We maintain that the 1-to-1 benefit stems from verbal and spatial codes used in parallel. Without rehearsal, performance appears to index working memory for abstract, cross-modal information. Correlations with other tasks suggest that it is an excellent measure of working memory capacity.
Previous studies demonstrated that statistical properties of adult-generated free associates predict the order of early noun learning. We investigate an explanation for this phenomenon that we call the associative structure of language: early word learning may be driven in part by contextual diversity in the learning environment, with contextual diversity in caregiver speech correlating with the cue-target structure in adult free association norms. To test this, we examined the co-occurrence of words in caregiver speech from the CHILDES database and found that a word's contextual diversity (the number of unique word types a word co-occurs with in caregiver speech) predicted the order of early word learning and was highly correlated with the number of unique associative cues for a given target word in adult free association norms. The associative structure of language was further supported by an analysis of the longitudinal development of early semantic networks (from 16 to 30 months) using contextual co-occurrence. This analysis supported two growth processes: the lure of the associates, in which the earliest learned words have more connections with known words, and preferential acquisition, in which the earliest learned words are the most contextually diverse in the learning environment. We further discuss the impact of word class (nouns, verbs, etc.) on these results.
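The contextual-diversity measure described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual pipeline: it assumes utterance-level co-occurrence (two words co-occur if they appear in the same utterance), whereas the study's exact co-occurrence window and any frequency thresholds are not specified here. The toy caregiver-speech sample is hypothetical.

```python
from collections import defaultdict

def contextual_diversity(utterances):
    """For each word, count the number of unique word types it co-occurs
    with across utterances. Co-occurrence is defined at the utterance
    level here (an assumption; the study's window may differ)."""
    neighbors = defaultdict(set)
    for utt in utterances:
        types = set(utt)
        for word in types:
            # every other type in the utterance counts as a neighbor
            neighbors[word] |= types - {word}
    return {word: len(co) for word, co in neighbors.items()}

# hypothetical tokenized caregiver utterances
speech = [
    ["look", "at", "the", "dog"],
    ["the", "dog", "runs"],
    ["see", "the", "ball"],
]
cd = contextual_diversity(speech)
# "the" co-occurs with look, at, dog, runs, see, ball -> diversity 6
```

Under the associative-structure hypothesis, words with higher values of this measure would be among the earliest learned and would tend to have many unique associative cues in adult free association norms.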
In three experiments, we evaluated remembering and intentional forgetting of attitude statements that were either congruent or incongruent with participants' own political attitudes. In Experiment 1, significant directed forgetting was obtained for incongruent statements, but not for congruent statements. In addition, in the remember group, recall was better for incongruent statements than congruent statements. To explain these findings, we propose a contextual competition at retrieval hypothesis, according to which incongruent statements become more strongly associated with their episodic context during encoding than do congruent statements. At the time of retrieval, incongruent statements compete with congruent statements due to the greater amount of contextual information stored in their memory trace. We tested this hypothesis in Experiment 2 by studying free recall of congruent and incongruent statements in a mixed-pure list design. In Experiment 3, memory for incongruent and congruent statements was tested under recognition test conditions that varied in terms of how much direct retrieval of contextual details they required. Overall, the results supported the contextual competition hypothesis, and they indicate the importance of context strength in both the remembering and intentional forgetting of attitude information.