By 27 months of age, toddlers hearing a novel verb in transitive syntax are able to (1) establish an initial representation for the verb based on its syntactic properties alone, even in the absence of a relevant visual scene, and (2) retrieve this representation later when a candidate causative referent comes into view. This ability is important considering that over 60% of the verbs that mothers produce in conversations with their children refer to events that are not currently observable. Here, we advance this finding in two ways. First, we demonstrate the same ability in 21-month-olds, who do not yet show mastery of transitive structures in their own productions. Second, we use analyses of toddlers' eye gaze to explore the time-course with which they process the novel verb and assign its referent when candidate scenes become available. These results (1) provide the first evidence that 21-month-olds establish a representation of a novel verb's meaning from syntax alone, and (2) establish that they process and assign meaning to novel verbs with a similar time-course to that for novel nouns. The findings are thus relevant to our understanding of both word learning and lexical processing of novel words.
The present study tests whether listeners use F0, duration, or some combination of the two to identify the presence of an accented word in a short discourse. Participants' eye movements to previously mentioned and new objects were monitored as participants listened to instructions to move objects in a display. The name of the target object on critical trials was resynthesized from naturally-produced utterances so that it had either high or low F0 and either long or short duration. Fixations to the new object were highest when there was a steep rise in F0. Fixations to the previously mentioned object were highest when there was a steep drop in F0. These results suggest that listeners use F0 slope to make decisions about the presence of an accent, and that neither F0 level nor duration alone determines accent interpretation.
Previous work has found that listeners prefer to attach ambiguous syntactic constituents to nouns produced with a pitch accent (Schafer et al., 1996). This study examines what factors underlie previously established accent attachment effects by testing whether these effects are driven by a preference to attach syntactic constituents to new or important information (the Syntax Hypothesis) or whether there is a bias to respond to post-sentence probe questions with an accented word (the Salience Hypothesis). One of the predictions of the Salience Hypothesis is that selection of accented words should be greater when a sentence is complex and processing resources are limited. The results from the experiments presented here show that the probability of listeners' selecting accented words when asked about the interpretation of a relative clause varies with sentence type: listeners selected accented words more frequently in long sentences than in short sentences, consistent with the predictions of the Salience Hypothesis. Furthermore, Experiment 4 demonstrates that listeners are more likely to respond to post-sentence questions with accented words than with non-accented words, even when no ambiguity is present, and even when the response results in an incorrect answer. These findings suggest that accent-driven attachment effects found in earlier studies reflect a post-sentence selection process rather than a syntactic processing mechanism.
Judgments of linguistic unacceptability may theoretically arise from either grammatical deviance or significant processing difficulty. Acceptability data are thus naturally ambiguous in theories that explicitly distinguish formal and functional constraints. Here, we consider this source ambiguity problem in the context of Superiority effects: the dispreference for ordering a wh-phrase in front of a syntactically "superior" wh-phrase in multiple wh-questions, e.g. What did who buy? More specifically, we consider the acceptability contrast between such examples and so-called D-linked examples, e.g. Which toys did which parents buy? Evidence from acceptability and self-paced reading experiments demonstrates that (i) judgments and processing times for Superiority violations vary in parallel, as determined by the kind of wh-phrases they contain, (ii) judgments increase with exposure while processing times decrease, (iii) reading times are highly predictive of acceptability judgments for the same items, and (iv) the effects of the complexity of the wh-phrases combine in both acceptability judgments and reading times. This evidence supports the conclusion that D-linking effects are likely reducible to independently motivated cognitive mechanisms whose effects emerge in a wide range of sentence contexts. This in turn suggests that Superiority effects, in general, may owe their character to differential processing difficulty.
Previous evidence suggests that when speakers produce sentences from memory or as picture descriptions, their choices of sentence structure are influenced by how easy it is to retrieve sentence material (accessibility). Three experiments assessed whether this pattern holds in naturalistic, interactive dialogue. Pairs of speakers took turns asking each other questions, the responses to which allowed mention of an optional "that" before either repeated (accessible) or unrepeated (inaccessible) material. Speakers' "that" mention was not sensitive to the repetition (accessibility) manipulation. Instead, "that" mention was sensitive to social factors: Speakers said "that" more when adopting another person's perspective rather than their own, and tended to say "that" more when attributing emotions to themselves rather than to another person. A fourth experiment confirmed that in a memory task, the original pattern is observed. These results suggest that "that" mention is sensitive to the cognitive forces that operate within a production task; in dialogue settings, social factors were especially influential.
How are the meanings of concepts represented and processed? We present a cognitive model of conceptual representations and processing—the Conceptual Structure Account (CSA; Tyler & Moss, 2001)—as an example of a distributed, feature-based approach. In the first section, we describe the CSA and evaluate relevant neuropsychological and experimental behavioural data. We discuss studies using linguistic and nonlinguistic stimuli, which are both presumed to access the same conceptual system. We then take the CSA as a framework for hypothesising how conceptual knowledge is represented and processed in the brain. This neurocognitive approach attempts to integrate the distributed feature-based characteristics of the CSA with a distributed and feature-based model of sensory object processing. Based on a review of relevant functional imaging and neuropsychological data, we argue that distributed accounts of feature-based representations have considerable explanatory power, and that a cognitive model of conceptual representations is needed to understand their neural bases.
The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralized feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described.
The goal of this research was to disentangle effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighborhood density, the number of phonologically similar words, in lexical acquisition. Two word learning experiments were conducted with 4-year-old children. Experiment 1 manipulated phonotactic probability while holding neighborhood density and referent characteristics constant. Experiment 2 manipulated neighborhood density while holding phonotactic probability and referent characteristics constant. Learning was tested at two time points (immediate vs. retention) in both a naming and referent identification task, although only data from the referent identification task were analyzed due to poor performance in the naming task. Results showed that children were more accurate learning rare sound sequences than common sound sequences and this was consistent across time points. In contrast, the effect of neighborhood density varied by time. Children were more accurate learning sparse sound sequences than dense sound sequences at the immediate test point but accuracy for dense sound sequences significantly improved by the retention test without further training. It was hypothesized that phonotactic probability and neighborhood density influenced different cognitive processes that underlie lexical acquisition.
This paper investigated children's ability to use syntactic structures to infer semantic information. The particular syntax-semantics link examined was the one between transitivity (transitive/intransitive structures) and telicity (telic/atelic perspectives; that is, boundedness). Although transitivity is an important syntactic reflex of telicity, it is neither necessary nor sufficient for predicting a telicity value; it is therefore a weak cue for telicity semantics. Nevertheless, children do make use of it. Experiment 1 used a match-to-sample task and found that 3-year-old children could use transitivity information to guide their interpretations of telicity. Experiment 2 used a preferential looking task with 2-year-old children and similarly found that these children could successfully use transitivity as a cue to telicity. Children in both experiments succeeded with both causal and directed-motion events, suggesting that telicity judgments are not tied to any one event type. These results are discussed in the context of other semantic elements that children can link to transitivity, and taken together, are argued to support a largely inferential link between transitivity and telicity.
The verbal fluency task is a widely used neuropsychological test of word retrieval efficiency. Both category fluency (e.g., list animals) and letter fluency (e.g., list words that begin with F) place demands on semantic memory and executive control functions. However, letter fluency places greater demands on executive control than category fluency, making this task well-suited to investigating potential bilingual advantages in word retrieval. Here we report analyses on category and letter fluency for bilinguals and monolinguals at four ages, namely, 7-year-olds, 10-year-olds, young adults, and older adults. Three main findings emerged: 1) verbal fluency performance improved from childhood to young adulthood and remained relatively stable in late adulthood; 2) beginning at age 10, meeting the executive control requirements of letter fluency was less effortful for bilinguals than for monolinguals, with a robust bilingual advantage on this task emerging in adulthood; 3) an interaction among factors showed that category fluency performance was influenced by both age and vocabulary knowledge but letter fluency performance was influenced by bilingual status.
Three experiments investigated factors contributing to syntactic priming during on-line comprehension. In all of the experiments, a prime sentence containing a reduced relative clause was presented prior to a target sentence that contained the same structure. Previous studies have shown that people respond more quickly when a syntactically related prime sentence immediately precedes a target. In the current study, ERP and eye-tracking measures were used to assess whether priming in sentence comprehension persists when one or more unrelated filler sentences appear between the prime and the target. In Experiment 1, a reduced P600 was found to target sentences both when there were no intervening unrelated fillers, and when there was one unrelated filler between the prime and the target. Thus, processing the prime sentence facilitated processing of the syntactic form of the target sentence. Experiments 2 and 3, eye-tracking experiments, showed that target sentence processing was facilitated when three filler sentences intervened between the prime and the target. These experiments show that priming effects in comprehension can be observed when unrelated material appears after a prime sentence and before the target. We interpret the results with respect to residual activation and implicit learning accounts of priming.
In this study, children, young adults and elderly adults were tested in production and comprehension tasks assessing referential choice. Our aims were (1) to determine whether speakers egocentrically base their referential choice on the preceding linguistic discourse or also take into account the perspective of a hypothetical listener and (2) whether the possible impact of perspective taking on referential choice changes with increasing age, with its associated changes in cognitive capacity. In the production task, participants described picture-based stories featuring two characters of the same gender, making it necessary to use unambiguous forms; in the comprehension task, participants interpreted potentially ambiguous pronouns at the end of similar orally presented stories. Young adults (aged 18-35) were highly sensitive to the informational needs of hypothetical conversational partners in their production and comprehension of referring expressions. In contrast, children (aged 4-7) did not take into account possible conversational partners and tended to use pronouns for all given referents, leading to the production of ambiguous pronouns that are unrecoverable for a listener. This was mirrored in the outcome of the comprehension task, where children were insensitive to the shift of discourse topic marked by the speaker. The elderly adults (aged 69-87) behaved differently from both young adults and children. They showed a clear sensitivity to the other person's perspective in both production and comprehension, but appeared to lack the necessary cognitive capacities to keep track of the prominence of discourse referents, producing more potentially ambiguous pronouns than young adults, though fewer than children. 
In conclusion, then, referential choice seems to depend on perspective taking in language, which develops with increasing linguistic experience and cognitive capacity, but also on the ability to keep track of the prominence of discourse referents, which gradually declines in older age.
Two eyetracking experiments examined the real-time production of verb arguments and adjuncts in healthy and agrammatic aphasic speakers. Verb argument structure has been suggested to play an important role during grammatical encoding (Bock & Levelt, 1994) and in speech deficits of agrammatic aphasic speakers (Thompson, 2003). However, little is known about how adjuncts are processed during sentence production. The present experiments measured eye movements while speakers were producing sentences with a goal argument (e.g., the mother is applying lotion to the baby) and a beneficiary adjunct phrase (e.g., the mother is choosing lotion for the baby) using a set of computer-displayed written words. Results showed that the sentence production system experiences greater processing cost for producing adjuncts than verb arguments and this distinction is preserved even after brain damage. In Experiment 1, healthy young speakers showed greater gaze durations and gaze shifts for adjuncts as compared to arguments. The same patterns were found in agrammatic and older speakers in Experiment 2. Interestingly, the three groups of speakers showed different time courses for encoding adjuncts: young speakers showed greater processing cost for adjuncts during speech, consistent with incremental production (Kempen & Hoenkamp, 1987). Older speakers showed this difference both before speech onset and during speech, while aphasic speakers appeared to preplan adjuncts before speech onset. These findings suggest that the degree of incrementality may be affected by speakers' linguistic capacity.
Sixteen-year-olds with specific language impairment (SLI), nonspecific language impairment (NLI), and those showing typical language development (TD) responded to target words in sentences that were either grammatical or contained a grammatical error immediately before the target word. The TD participants showed the expected slower response times (RTs) when errors preceded the target word, regardless of error type. The SLI and NLI groups also showed the expected slowing, except when the error type involved the omission of a tense/agreement inflection. This response pattern mirrored an early developmental period of alternating between using and omitting tense/agreement inflections that is characteristic of SLI and NLI. The findings could not be readily attributed to factors such as insensitivity to omissions in general or insensitivity to the particular phonetic forms used to mark tense/agreement. The observed response pattern may represent continued difficulty with tense/agreement morphology that persists in subtle form into adolescence.
Word and name retrieval failures increase with age, and this study investigated how priming impacts young and older adults' ability to produce proper names. The transmission deficit hypothesis predicts facilitation from related prime names, whereas the blocking and inhibition deficit hypotheses predict interference from related names, especially for older adults. On half of our experimental trials, we exposed participants to a prime name that was phonologically and semantically related to a target name. Related names facilitated production of targets overall, with older adults' naming improving at least as much as young adults'. Results are contrary to predictions of the blocking and inhibitory deficit hypotheses, and suggest that an activation-based model of memory and language better accounts for retrieval and production of well-known names.
Research on prosody has recently become an important focus in various disciplines, including Linguistics, Psychology, and Computer Science. This article reviews recent research advances on two key issues: prosodic phrasing and prosodic prominence. Both aspects of prosody are influenced by linguistic factors such as syntactic constituent structure, semantic relations, phonological rhythm, pragmatic considerations, and also by processing factors such as the length, complexity or predictability of linguistic material. Our review summarizes recent insights into the production and perception of these two components of prosody and their grammatical underpinnings. While this review covers only a subset of the broader set of research topics on prosody in cognitive science, the topics discussed are representative of a tendency in the field toward a more interdisciplinary approach.
This study tests the hypothesis that three common types of disfluency (fillers, silent pauses, and repeated words) reflect variance in what strategies are available to the production system for responding to difficulty in language production. Participants' speech in a storytelling paradigm was coded for the three disfluency types. Repeats occurred most often when difficult material was already being produced and could be repeated, but fillers and silent pauses occurred most when difficult material was still being planned. Fillers were associated only with conceptual difficulties, consistent with the proposal that they reflect a communicative signal, whereas silent pauses and repeats were also related to lexical and phonological difficulties. These differences are discussed in terms of different strategies available to the language production system.
A growing body of evidence suggests that semantic access is obligatory. Several studies have demonstrated that brain activity associated with semantic processing, measured in the N400 component of the event-related brain potential (ERP), is elicited even by meaningless, orthographically illegal strings, suggesting that semantic access is not gated by lexicality. However, the downstream consequences of that activity vary by item type, exemplified by the typical finding that N400 activity is reduced by repetition for words and pronounceable nonwords but not for illegal strings. We propose that this lack of repetition effect for illegal strings is caused not by lack of contact with semantics, but by the unrefined nature of that contact under conditions in which illegal strings can be readily categorised as task-irrelevant. To test this, we collected ERPs from participants performing a modified Lexical Decision Task, in which the presence of orthographically illegal acronyms rendered meaningless illegal strings more difficult lures than normal. Confirming our hypothesis, under these conditions illegal strings elicited robust N400 repetition effects, quantitatively and qualitatively similar to those elicited by words, pseudowords, and acronyms.
Aging affects the ability to retrieve words for production, despite maintenance of lexical knowledge. In this study, we investigated the influence of lexical variables on picture naming accuracy and latency in adults ranging in age from 22 to 86 years. In particular, we explored the influence of phonological neighborhood density, which has been shown to exert competitive effects on word recognition, but to facilitate word production, a finding with implications for models of the lexicon. Naming responses were slower and less accurate for older participants, as expected. Target frequency also played a strong role, with facilitative frequency effects becoming stronger with age. Neighborhood density interacted with age, such that naming was slower for high-density than low-density items, but only for older subjects. Explaining this finding within an interactive activation model suggests that, as we age, the ability of activated neighbors to facilitate target production diminishes, while their activation puts them in competition with the target.
Two partially independent issues are addressed in two auditory rating studies: under what circumstances is a sub-string of a sentence identified as a stand-alone sentence, and under what circumstances do globally ill-formed but 'locally coherent' analyses (Tabor, Galantucci, & Richardson, 2004) emerge? A new type of locally coherent structure is established in Experiment 1, where a that-less complement clause is at least temporarily analyzed as a stand-alone sentence when it corresponds to a prosodic phrase. In Experiment 2, reduced relative clause structures like those in Tabor et al. were investigated. As in Experiment 1, the root sentence (mis-)analyses emerged most frequently when the locally coherent clause corresponded to a prosodic phrase. However, a substantial number of locally coherent analyses emerged even without prosodic help, especially in examples with for-datives (which do not grammatically permit a reduced relative clause structure for some speakers). Overall, the results suggest that prosodic grouping of constituents encourages analysis of a sub-string as a root sentence, and raise the question of whether all local coherence structures involve analysis of an utterance-final sub-string as a root sentence.
Two-year-olds assign appropriate interpretations to verbs presented in two English transitivity alternations, the causal and unspecified-object alternations (Naigles, 1996). Here we explored how they might do so. Causal and unspecified-object verbs are syntactically similar. They can be either transitive or intransitive, but differ in the semantic roles they assign to the subjects of intransitive sentences (undergoer and agent, respectively). To distinguish verbs presented in these two alternations, children must detect this difference in role assignments. We examined distributional features of the input as one possible source of information about this role difference. Experiment 1 showed that in a corpus of child-directed speech, causal and unspecified-object verbs differed in their patterns of intransitive-subject animacy and lexical overlap between nouns in subject and object positions. Experiment 2 tested children's ability to use these two distributional cues to infer the meaning of a novel causal or unspecified-object verb, by separating the presentation of a novel verb's distributional properties from its potential event referents. Children acquired useful combinatorial information about the novel verb simply by listening to its use in sentences, and later retrieved this information to map the verb to an appropriate event.
We investigated the relevance of linguistic and perceptual factors to sign processing by comparing hearing individuals and deaf signers as they performed a handshape monitoring task, a sign-language analogue to the phoneme-monitoring paradigms used in many spoken-language studies. Each subject saw a series of brief video clips, each of which showed either an ASL sign or a phonologically possible but non-lexical "non-sign," and responded when the viewed action was formed with a particular handshape. Stimuli varied with respect to the factors of Lexicality, handshape Markedness (Battison, 1978), and Type, defined according to whether the action is performed with one or two hands and, for two-handed stimuli, whether or not the action is symmetrical. Deaf signers performed faster and more accurately than hearing non-signers, and effects related to handshape Markedness and stimulus Type were observed in both groups. However, no effects or interactions related to Lexicality were seen. A further analysis restricted to the deaf group indicated that these results were not dependent upon subjects' age of acquisition of ASL. This work provides new insights into the processes by which the handshape component of sign forms is recognized in a sign language, the role of language experience, and the extent to which these processes may or may not be considered specifically linguistic.
A varied psychological vocabulary now describes the cognitive and social conditions of language production, the ultimate result of which is the mechanical action of vocal musculature in spoken expression. Following the logic of the speech chain, descriptions of production have often exhibited a clear analogy to accounts of perception. This reciprocality is especially evident in explanations that rely on reafference to control production, on articulation to inform perception, and on strict parity between produced and perceived form to provide invariance in the relation between abstract linguistic objects and observed expression. However, a causal account of production and perception cannot derive solely from this hopeful analogy. Despite sharing of abstract linguistic representations, the control functions in production and perception as well as the constraints on their use stand in fundamental disanalogy. This is readily seen in the different adaptive challenges to production (to speak in a single voice) and to perception (to resolve familiar linguistic properties in any voice). This acknowledgment sets descriptive and theoretical challenges that break the symmetry of production and perception. As a consequence, this recognition dislodges an old impasse between the psychoacoustic and motoric accounts in the regulation of production and perception.
Parallelism effects refer to the facilitated processing of a target structure when it follows a similar, parallel structure. In coordination, a parallelism-related conjunction triggers the expectation that a second conjunct with the same structure as the first conjunct should occur. It has been proposed that parallelism effects reflect the use of the first structure as a template that guides the processing of the second. In this study, we examined the role of parallelism in real-time anaphora resolution by charting activation patterns in coordinated constructions containing anaphora, Verb-Phrase Ellipsis (VPE) and Noun-Phrase Traces (NP-traces). Specifically, we hypothesised that an expectation of parallelism would lead the parser to assume a structure similar to the first conjunct in the second, anaphora-containing conjunct. Positing such a similar structure would result in early postulation of covert anaphora. Experiment 1 confirms that following a parallelism-related conjunction, first-conjunct material is activated in the second conjunct. Experiment 2 reveals that an NP-trace in the second conjunct is posited immediately where licensed, which is earlier than previously reported in the literature. In light of our findings, we propose an intricate relation between structural expectations and anaphor resolution.
Animacy is known to play an important role in language processing and production, but debate remains as to how it exerts its effects: 1) through links to syntactic ordering, 2) through inherent differences between animate and inanimate entities in their salience/lexico-semantic accessibility, or 3) through links to specific thematic roles. We contrasted these three accounts in two event-related potential (ERP) experiments examining the processing of direct object arguments in simple English sentences. In Experiment 1, we found a larger N400 to animate than inanimate direct object arguments assigned the Patient role, ruling out the second account. In Experiment 2 we found no difference in the N400 evoked by animate direct object arguments assigned the Patient role (prototypically inanimate) and those assigned the Experiencer role (prototypically animate), ruling out the third account. We therefore suggest that animacy may impact processing through a direct link to syntactic linear ordering, at least on post-verbal arguments in English. We also examined processing on direct object arguments that violated the animacy-based selection restriction constraints of their preceding verbs. These violations evoked a robust P600, which was not modulated by thematic role assignment or reversibility, suggesting that the so-called semantic P600 is driven by overall propositional impossibility, rather than thematic role reanalysis.
The noun plural system in Modern Standard Arabic lies at a nexus of critical issues in morphological learnability. The suffixing "sound" plural competes with as many as 31 non-concatenative "broken" plural patterns. Our computational analysis of singular-plural pairs in the Corpus of Contemporary Arabic explores what types of linguistic information are statistically relevant to morphological generalisation for this highly complex system. We show that an analogical approach with the generalised context model is highly successful in predicting the plural form for any given singular form. This model proves to be robust to variation, as evidenced by its stability across 10 rounds of cross-validation. The predictive power is carried almost entirely by the CV template, a representation which specifies a segment's status as a consonant or vowel only, providing further support for the abstraction of prosodic templates in the Arabic morphological system as proposed by McCarthy and Prince.
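The analogical approach described in this abstract can be illustrated with a minimal sketch of the generalised context model (GCM) operating over CV templates: a novel singular is assigned the plural class whose stored exemplars have the highest summed exponential similarity to it. The transliteration scheme, distance function, exemplar lexicon, and plural-class labels below are invented for illustration and are not taken from the study's corpus or model.

```python
from math import exp

def cv_template(word):
    """Reduce a transliterated word to its CV skeleton (consonant/vowel only)."""
    vowels = set("aiu")  # toy assumption: short-vowel transliteration
    return "".join("V" if ch in vowels else "C" for ch in word)

def distance(t1, t2):
    """Positional mismatches plus the length difference between two templates."""
    mismatches = sum(a != b for a, b in zip(t1, t2))
    return mismatches + abs(len(t1) - len(t2))

def gcm_predict(singular, exemplars, c=1.0):
    """GCM prediction: each stored exemplar votes for its plural class with
    weight exp(-c * distance); votes are normalised into probabilities."""
    target = cv_template(singular)
    scores = {}
    for ex_singular, plural_class in exemplars:
        sim = exp(-c * distance(target, cv_template(ex_singular)))
        scores[plural_class] = scores.get(plural_class, 0.0) + sim
    total = sum(scores.values())
    return {cls: s / total for cls, s in scores.items()}

# Toy exemplar lexicon of (singular, plural class) pairs -- illustrative only.
exemplars = [
    ("kitaab", "CuCuC"),
    ("qalam", "aCCaaC"),
    ("baab", "aCCaaC"),
    ("daftar", "CaCaaCiC"),
]
print(gcm_predict("jamal", exemplars))
```

In this toy run the novel singular shares its CV template with one exemplar exactly, so its plural class dominates the distribution; because the model sees only C/V status, any segmental detail beyond the template is invisible to it, which is the sense in which the abstract's predictive power is "carried almost entirely by the CV template."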
The Arabic language is acquired by its native speakers both as a regional spoken Arabic dialect, acquired in early childhood as a first language, and as the more formal variety known as Modern Standard Arabic (MSA), typically acquired later in childhood. These varieties of Arabic show a range of linguistic similarities and differences. Since previous psycholinguistic research in Arabic has primarily used MSA, it remains to be established whether the same cognitive properties hold for the dialects. Here we focus on the morphological level, and ask whether roots and word patterns play similar or different roles in MSA and in the regional dialect known as Southern Tunisian Arabic (STA). In two intra-modal auditory-auditory priming experiments, we found similar results with strong priming effects for roots and patterns in both varieties. Despite differences in the timing and nature of the acquisition of MSA and STA, root and word pattern priming was clearly distinguishable from form-based and semantic-based priming in both varieties. The implications of these results for theories of Arabic diglossia and theories of morphological processing are discussed.
Speech production has been studied within a number of traditions including linguistics, psycholinguistics, motor control, neuropsychology, and neuroscience. These traditions have had limited interaction, ostensibly because they target different levels of speech production or different dimensions such as representation, processing, or implementation. However, closer examination reveals a substantial convergence of ideas across the traditions, and recent proposals have suggested that an integrated approach may help move the field forward. The present article reviews one such attempt at integration, the state feedback control model and its descendent, the hierarchical state feedback control model. Also considered is how phoneme-level representations might fit in the context of the model.
We present an overview of recent research conducted in the field of language production based on papers presented at the first edition of the International Workshop on Language Production (Marseille, France, September 2004). This article comprises two main parts. In the first part, consisting of three sections, we review the articles that are included in this Special Issue. These three sections deal with three different topics of general interest for models of language production: (A) the general organisational principles of the language production system, (B) several aspects of the lexical selection process and (C) the representations and processes used during syntactic encoding. In the second part, we discuss future directions for research in the field of language production, given the considerable developments that have occurred in recent years.
Providing evidence for the universal tendencies of patterns in the world's languages can be difficult, as it is impossible to sample all possible languages, and linguistic samples are subject to interpretation. However, experimental techniques such as artificial grammar learning paradigms make it possible to uncover the psychological reality of claimed universal tendencies. This paper addresses learning of phonological patterns (systematic tendencies in the sounds in language). Specifically, I explore the role of phonetic grounding in learning round harmony, a phonological process in which words must contain either all round vowels ([o, u]) or all unround vowels ([i, e]). The phonetic precursors to round harmony are such that mid vowels ([o, e]), which receive the greatest perceptual benefit from harmony, are most likely to trigger harmony. High vowels ([i, u]), however, are cross-linguistically less likely to trigger round harmony. Adult participants were exposed to a miniature language that contained a round harmony pattern in which the harmony source triggers were either high vowels ([i, u]) (poor harmony source triggers) or mid vowels ([o, e]) (ideal harmony source triggers). Only participants who were exposed to the ideal mid vowel harmony source triggers were successfully able to generalize the harmony pattern to novel instances, suggesting that perception and phonetic naturalness play a role in learning.
This study employs a naming task to examine the role of the syllable in speech production, focusing on a lesser-studied aspect of syllabic processing, the interaction of subsyllabic patterns (i.e. syllable phonotactics) and higher-level prosody, in this case, stress assignment in Spanish. Specifically, we examine a controversial debate in Spanish regarding the interaction of syllable weight and stress placement, showing that traditional representations of weight fail to predict the differential modulation of stress placement by rising versus falling diphthongs in Spanish nonce forms. Our results also suggest that the internal structure of the syllable plays a larger role than is assumed in the processing literature in that it modulates higher-level processes such as stress encoding. Our results thus inform the debate regarding syllable weight in Spanish and linguistic theorizing more broadly, as well as expand our understanding of the importance of the syllable, and more specifically its internal structure, in modulating word processing.
In typical interactions, speakers frequently produce utterances that appear to reflect beliefs about the common ground shared with particular addressees. Horton and Gerrig (2005a) proposed that one important basis for audience design is the manner in which conversational partners serve as cues for the automatic retrieval of associated information from memory. This paper reports the results of two experiments demonstrating the influence of partner-specific memory associations on language production. Following an initial task designed to establish associations between specific words (Experiment 1) or object categories (Experiment 2) and each of two partners, participants named a series of pictures in the context of the same two individuals. Naming latencies were shortest for responses associated with the current partner, and were not significantly correlated with explicit recall of partner-item associations. Such partner-driven memory retrieval may constrain the information accessible to speakers as they produce utterances for particular addressees.
In the target article, Hickok asserts that auditory and somatosensory feedback control operate at different levels in the speech production hierarchy, with somatosensory feedback control involved in lower-level phonemic processing and auditory feedback control involved in higher-level syllabic processing. This assertion is based in part on a characterization of phonemes as timeless feature bundles. In this commentary I argue that the linguistic conception of phonemes as timeless feature bundles is insufficient for characterizing the production of phonemes by the motor system, and I review evidence that auditory feedback control is used even at the sub-phonemic level of speech production, contrary to its role in the hierarchical state feedback control model.
Autism involves primary impairments in both language and communication, yet in recent years the main focus of research has been on the communicative deficits that define the population. The study reported in this paper investigated language functioning in a group of 89 children diagnosed with autism using the ADI-R, and meeting DSM-IV criteria. The children, who were between 4 and 14 years old, were administered a battery of standardized language tests tapping phonological, lexical, and higher-order language abilities. The main findings were that among the children with autism there was significant heterogeneity in their language skills, although across all the children, articulation skills were spared. Different subgroups of children with autism were identified on the basis of their performance on the language measures. Some children with autism have normal language skills; for other children, their language skills are significantly below age expectations. The profile of performance across the standardized measures for the language-impaired children with autism was similar to the profile that defines the disorder known as specific language impairment (SLI). The implications of this language-impaired subgroup in autism for understanding the genetics and definition of both autism and SLI are discussed.
This study investigates the contribution of grammatical gender to integrating depicted nouns into sentences during on-line comprehension, and whether semantic congruity and gender agreement interact, using two tasks: naming and semantic judgement of pictures. Native Spanish speakers comprehended spoken Spanish sentences with an embedded line drawing, which replaced a noun that either made sense or not with the preceding sentence context and either matched or mismatched the gender of the preceding article. In Experiment 1a (picture naming), slower naming times were found for gender-mismatching pictures than for matches, as well as for semantically incongruous pictures than for congruous ones. In addition, the effects of gender agreement and semantic congruity interacted; specifically, pictures that were both semantically incongruous and gender-mismatching were named slowest, but not as slow as would be expected if the delays from the two violations were independent and additive. Compared with a neutral baseline, with pictures embedded in simple command sentences like "Now please say ____", both facilitative and inhibitory effects were observed. Experiment 1b replicated these results with low-cloze gender-neutral sentences, more similar in structure and processing demands to the experimental sentences. In Experiment 2, participants judged a picture's semantic fit within a sentence by button-press; gender agreement and semantic congruity again interacted, with gender agreement having an effect on congruous but not incongruous pictures. Two distinct effects of gender are hypothesised: a "global" predictive effect (observed with and without overt noun production), and a "local" inhibitory effect (observed only with production of gender-discordant nouns).
The effects of knowledge of sign language on co-speech gesture were investigated by comparing the spontaneous gestures of bimodal bilinguals (native users of American Sign Language and English; n = 13) and non-signing native English speakers (n = 12). Each participant viewed and re-told the Canary Row cartoon to a non-signer whom they did not know. Nine of the thirteen bimodal bilinguals produced at least one ASL sign, which we hypothesise resulted from a failure to inhibit ASL. Compared with non-signers, bimodal bilinguals produced more iconic gestures, fewer beat gestures, and more gestures from a character viewpoint. The gestures of bimodal bilinguals also exhibited a greater variety of handshape types and more frequent use of unmarked handshapes. We hypothesise that these semantic and form differences arise from an interaction between the ASL language production system and the co-speech gesture system.
BOLD signal was measured in sixteen participants who made timed font change detection judgments in visually presented sentences that varied in syntactic structure and the order of animate and inanimate nouns. Behavioral data indicated that sentences were processed to the level of syntactic structure. BOLD signal increased in visual association areas bilaterally and left supramarginal gyrus in the contrast of sentences with object- and subject-extracted relative clauses without font changes in which the animacy order of the nouns biased against the syntactically determined meaning of the sentence. This result differs from the findings in a non-word detection task (Caplan et al., 2008a), in which the same contrast led to increased BOLD signal in the left inferior frontal gyrus. The difference in areas of activation indicates that the sentences were processed differently in the two tasks. These differences were further explored in an eye tracking study using the materials in the two tasks. Issues pertaining to how parsing and interpretive operations are affected by the task being performed, and how this might affect BOLD signal correlates of syntactic contrasts, are discussed.
Investigations of the impact of morphemic boundaries on transposed-letter priming effects have yielded conflicting results. Five masked priming lexical decision experiments were conducted to examine the interaction of letter transpositions and morphemic boundaries with English suffixed derivations. Experiments 1-3 found that responses to monomorphemic target words (e.g., SPEAK) were facilitated to the same extent by morphologically related primes containing letter transpositions that did (SPEAEKR) or did not (SPEKAER) cross a morphemic boundary. This pattern was also observed in Experiments 4 and 5, in which the targets (e.g., SPEAKER) were the base forms of the transposed-letter primes. Thus, in these experiments the influence of the morphological structure of a transposed-letter prime did not depend on whether the letter transposition crossed a morphological boundary.
We used event-related potentials (ERPs) to investigate the time course and distribution of brain activity while adults performed (a) a sequential learning task involving complex structured sequences, and (b) a language processing task. The same positive ERP deflection, the P600 effect, typically linked to difficult or ungrammatical syntactic processing, was found for structural incongruencies in both sequential learning as well as natural language, and with similar topographical distributions. Additionally, a left anterior negativity (LAN) was observed for language but not for sequential learning. These results are interpreted as an indication that the P600 indexes violations of expectations for upcoming material, and the cost of integrating that material, when processing complex sequential structure. We conclude that the same neural mechanisms may be recruited for both syntactic processing of linguistic stimuli and sequential learning of structured sequence patterns more generally.
Investigations of how we produce and perceive prosodic patterns are not only interesting in their own right but can inform fundamental questions in language research. We here argue that functional magnetic resonance imaging (fMRI) in general - and the functional localization approach in particular (e.g., Kanwisher et al., 1997; Saxe et al., 2006; Fedorenko et al., 2010; Nieto-Castañon & Fedorenko, 2012) - has the potential to help address open research questions in prosody research and at the intersection of prosody and other domains. Critically, this approach can go beyond questions like "where in the brain does mental process x produce activation" and toward questions that probe the nature of the representations and computations that subserve different mental abilities. We describe one way to functionally define regions sensitive to sentence-level prosody in individual subjects. This or similar "localizer" contrasts can be used in future studies to test hypotheses about the precise contributions of prosody-sensitive brain regions to prosodic processing and cognition more broadly.
Previous studies have shown that under masked priming conditions, CORNER primes CORN as strongly as TEACHER primes TEACH and more strongly than BROTHEL primes BROTH. This result has been taken as evidence of a purely structural level of representation at which words are decomposed into morphological constituents in a manner that is independent of semantics. The research reported here investigated the influence of semantic transparency on long-term morphological priming. Two experiments demonstrated that while lexical decisions were facilitated by semantically transparent primes like TEACHER, semantically opaque words like CORNER had no effect. Although differences in the nonword foils used in each experiment gave rise to somewhat different patterns of results, this difference in the effects of transparent and opaque primes was found in both experiments. The implications of this finding for accounts of morphological effects on visual word identification are discussed.
Hungarian is a language with morphological case marking and relatively free word order. These typological characteristics make it a good testing ground for the crosslinguistic validity of theories on processing sentences with relative clauses. Our study focussed on effects of structural factors and processing capacity. We tested 43 typically developing children in two age groups (ages 4;11-7;2 and 8;2-11;4) in an act-out task. Differences in comprehension difficulty between different word order patterns and different head function relations were observed independently of each other. The structural properties causing difficulties in comprehension were interruption of main clauses, greater distance between the verb and its arguments, accusative case of relative pronouns, and SO head function relations. Importantly, analyses of associations between working memory and sentence comprehension revealed that structural factors made processing difficult by burdening components of working memory. These results support processing accounts of sentence comprehension in a language typologically different from English.
Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
At the one-word stage children use gesture to supplement their speech ('eat'+point at cookie), and the onset of such supplementary gesture-speech combinations predicts the onset of two-word speech ('eat cookie'). Gesture thus signals a child's readiness to produce two-word constructions. The question we ask here is what happens when the child begins to flesh out these early skeletal two-word constructions with additional arguments. One possibility is that gesture continues to be a forerunner of linguistic change as children flesh out their skeletal constructions by adding arguments. Alternatively, after serving as an opening wedge into language, gesture could cease its role as a forerunner of linguistic change. Our analysis of 40 children (from 14 to 34 months) showed that children relied on gesture to produce the first instance of a variety of constructions. However, once each construction was established in their repertoire, the children did not use gesture to flesh out the construction. Gesture thus acts as a harbinger of linguistic steps only when those steps involve new constructions, not when the steps merely flesh out existing constructions.
In previous studies in English examining the influence of phonological neighbourhood density in spoken word production, words with many similar sounding words, or a dense neighbourhood, were produced more quickly and accurately than words with few similar sounding words, or a sparse neighbourhood. The influence of phonological neighbourhood density on the process of spoken word production in Spanish was examined with a picture-naming task. The results showed that pictures with Spanish names from sparse neighbourhoods were named more quickly than pictures with Spanish names from dense neighbourhoods. The present pattern of results is the opposite of what has been previously found in speech production in English. We hypothesise that differences in the morphology of Spanish and English and/or the location in the word where phonological neighbours tend to occur may contribute to the processing differences observed in the two languages.
Mental representations formed from words or phrases may vary considerably in their feature-based complexity. Modern theories of retrieval in sentence comprehension do not indicate how this variation and the role of encoding processes should influence memory performance. Here, memory retrieval in language comprehension is shown to be influenced by a target's representational complexity in terms of syntactic and semantic features. Three self-paced reading experiments provide evidence that reading times at retrieval sites (but not earlier) decrease when more complex phrases occur as filler-phrases in filler-gap dependencies. The data also show that complexity-based effects are not dependent on string length, syntactic differences, or the amount of processing the stimuli elicit. Activation boosting and reduced similarity-based interference are implicated as likely sources of these complexity-based effects.
Recent research on the brain mechanisms underlying language processing has implicated the left anterior temporal lobe (LATL) as a central region for the composition of simple phrases. Because these studies typically present their critical stimuli without contextual information, the sensitivity of LATL responses to contextual factors is unknown. In this magnetoencephalography (MEG) study, we employed a simple question-answer paradigm to manipulate whether a prenominal adjective or determiner is interpreted restrictively, i.e., as limiting the set of entities under discussion. Our results show that the LATL is sensitive to restriction, with restrictive composition eliciting higher responses than non-restrictive composition. However, this effect was only observed when the restricting element was a determiner, adjectival stimuli showing the opposite pattern, which we hypothesise to be driven by the special pragmatic properties of non-restrictive adjectives. Overall, our results demonstrate a robust sensitivity of the LATL to high level contextual and potentially also pragmatic factors.
Under what circumstances do people agree that a kind-referring generic sentence (e.g., "Swans are beautiful") is true? We hypothesized that theory-based considerations are sufficient, independently of prevalence/frequency information, to lead to acceptance of a generic statement. To provide evidence for this general point, we focused on demonstrating the impact of a specific theory-based, essentialist expectation-that the physical features characteristic of a biological kind emerge as a natural product of development-on participants' reasoning about generics. Across 3 studies, adult participants (N = 99) confirmed our hypothesis, preferring to map generic sentences (e.g., "Dontrets have long tails") onto novel categories for which the key feature (e.g., long tails) was absent in all the young but present in all the adults rather than onto novel categories for which the key feature was at least as prevalent but present in some of the young and in some of the adults. Control conditions using "some"- and "most"-quantified sentences demonstrated that this mapping is specific to generic meaning. These results suggest that generic meaning does not reduce to quantification and is sensitive to theory-based expectations.
Two visual-world experiments investigated whether and how quickly discourse-based expectations about the prosodic realization of spoken words modulate interpretation of acoustic-prosodic cues. Experiment 1 replicated effects of segmental lengthening on activation of onset-embedded words (e.g. pumpkin) using resynthetic manipulation of duration and fundamental frequency (F0). In Experiment 2, the same materials were preceded by instructions establishing information-structural differences between competing lexical alternatives (i.e. repeated vs. newly-assigned thematic roles) in critical instructions. Eye-movements generated upon hearing the critical target word revealed a significant interaction between information structure and target-word realization: Segmental lengthening and pitch excursion elicited more fixations to the onset-embedded competitor when the target word remained in the same thematic role, but not when its thematic role changed. These results suggest that information structure modulates the interpretation of acoustic-prosodic cues by influencing expectations about fine-grained acoustic-phonetic properties of the unfolding utterance.
We investigated the functional organization of neural systems supporting language production when the primary language articulators are also used for meaningful, but non-linguistic, expression such as pantomime. Fourteen hearing non-signers and 10 deaf native users of American Sign Language (ASL) participated in an H₂¹⁵O PET study in which they generated action pantomimes or ASL verbs in response to pictures of tools and manipulable objects. For pantomime generation, participants were instructed to "show how you would use the object." For verb generation, signers were asked to "generate a verb related to the object." The objects for this condition were selected to elicit handling verbs that resemble pantomime (e.g., TO-HAMMER, in which hand configuration and movement mimic the act of hammering) and non-handling verbs that do not (e.g., POUR-SYRUP, produced with a "Y" handshape). For the baseline task, participants viewed pictures of manipulable objects and an occasional non-manipulable object and decided whether the objects could be handled, gesturing "yes" (thumbs up) or "no" (hand wave). Relative to baseline, generation of ASL verbs engaged left inferior frontal cortex, but when non-signers produced pantomimes for the same objects, no frontal activation was observed. Both groups recruited left parietal cortex during pantomime production. However, for deaf signers the activation was more extensive and bilateral, which may reflect a more complex and integrated neural representation of hand actions. We conclude that the production of pantomimes versus ASL verbs (even those that resemble pantomime) engages partially segregated neural systems that support praxic versus linguistic functions.