Article

Linguistics in Cognitive Science: The State of the Art

Authors:
Ray Jackendoff

Abstract

The special issue of The Linguistic Review on "The Role of Linguistics in Cognitive Science" presents a variety of viewpoints that complement or contrast with the perspective offered in Foundations of Language (Jackendoff 2002a). The present article is a response to the special issue. It discusses what it would mean to integrate linguistics into cognitive science, then shows how the parallel architecture proposed in Foundations seeks to accomplish this goal by altering certain fundamental assumptions of generative grammar. It defends this approach against criticisms both from mainstream generative grammar and from a variety of broader attacks on the generative enterprise, and it reflects on the nature of Universal Grammar. It then shows how the parallel architecture applies directly to processing and defends this construal against various critiques. Finally, it contrasts views in the special issue with that of Foundations with respect to what is unique about language among cognitive capacities, and it conjectures about the course of the evolution of the language faculty.


... Jackendoff (2010) argues that there is a dependency between theories of language and theories of language evolution: "Your theory of language evolution depends on your theory of language". Building on earlier work (Jackendoff 1999, 2002, 2007a, 2007b; Culicover & Jackendoff 2005), he describes two types of architecture for the human language faculty: syntactocentric architectures (e.g., most work within the Minimalist Program) and the parallel architecture. ...
... Jackendoff (2011) motivates the parallel architecture by arguing that it better integrates with what is known about brain computation and other aspects of human cognition than other conceptions of the language faculty, particularly syntactocentric architectures such as that assumed by much work within the Minimalist Program (see section 3). Further, Jackendoff claims that the parallel ... [From footnote 7: In their review of Jackendoff (2002), Phillips & Lau (2004) discuss limitations of Jackendoff's arguments on behalf of the parallel architecture and, in particular, dispute his claim that the architecture makes available a much more plausible approach to the evolution of language than syntactocentric approaches (Jackendoff 1999, 2002, 2007a, 2007b, 2010, 2011).]
... In various publications, Jackendoff (1999, 2002, 2007a, 2007b, 2011) discusses how language might have evolved gradually. Figure 3 is from Jackendoff (1999; see also Jackendoff 2002: 238 and 2007b: 393) and is a hypothesis about how the entire human language faculty (including, but not limited to, syntax) might have evolved, given parallel architecture assumptions. ...
Article
Full-text available
Contemporary work on the evolution of syntax can be roughly divided into two perspectives. The incremental view claims that the evolution of syntax involved multiple stages between the non-combinatorial communication system of our last common ancestor with chimpanzees and modern human syntax. The saltational view claims that syntax was the result of a single evolutionary development. What is the relationship between syntactic theory and these two perspectives? Jackendoff (2010) argues that “[y]our theory of language evolution depends on your theory of language”. For example, he claims that most work within the Minimalist Program (Chomsky 1995) is forced to the saltational view. In this paper it is argued that there is not a dependency relation between theories of syntax and theories of syntactic evolution. The parallel architecture (Jackendoff 2002) is consistent with a saltational theory of syntactic evolution. The architecture assumed in most minimalist work is compatible with an incremental theory.
... Jackendoff recognizes the potential importance of grammaticalization, but unfortunately has not familiarized himself with the extensive literature on this topic, where great strides have been made in the last twenty years. Jackendoff (2007:xxx) says: Although it is undeniable that grammaticalization through historical change is an important source of closed-class morphology, I find a lot left unexplained. The questions he asks about grammaticalization are indeed addressed in the literature and much empirical evidence has been brought to bear on their answers. ...
... Consider now how the questions Jackendoff (2007:xxx) asks have been addressed. What is it about the semantics of go+purpose that lends itself to being bleached out into a future? ...
... Despite over twenty years of research on connectionist modeling, no connectionist model comes close. (Jackendoff 2007:xxx) While we would not wish to claim connectionist approaches have succeeded fully in addressing the processing of complex sentences, we would also point out that such sentences pose challenges for other approaches, and our first sentence is a case in point. First of all, there is no consensus among linguists on how this sentence should be represented. ...
Article
Full-text available
Jackendoff and other linguists have acknowledged that there is gradience in language but have tended to treat gradient phenomena as separate from the core of language, which is viewed as fully productive and compositional. This perspective suffuses Jackendoff's (2007) response to our position paper (Bybee and McClelland 2005). We argue that gradience is an inherent feature of language representation, processing, and learning, and that natural language exhibits all degrees of gradience. Contrary to Jackendoff's assertions, we do not reject the possibility of innate constraints on language, feeling only that the jury is out on the nature and specificity of such constraints. We address a number of questions Jackendoff raises about the process of grammaticalization, drawing on extant literature of which he appears to be unaware. We also address Jackendoff's views on the prospect that connectionist models can address core aspects of language processing and representation. Here again extant literature of which Jackendoff seems unaware addresses all four of his general objections to connectionist approaches.
... Simpler Syntax within the Parallel Architecture framework (Jackendoff, 2002, 2007a, 2007b) allocates main roles to the syntax-semantics interface, semantics, and the lexicon while admitting only the minimally necessary syntactic structure in explaining linguistic phenomena (Culicover & Jackendoff, 2005, p. 5; cf. Chomsky, 1995). ...
... Korean learners showed no evidence of syntactic gap processing or verb-driven integration in reading sentences with long scrambling, but they nevertheless comprehended those sentences accurately 90.28% of the time. I propose an analysis of Korean (and Chinese) learners' processing of sentences with long scrambling that draws on Simpler Syntax (Culicover & Jackendoff, 2005) within the Parallel Architecture framework (Jackendoff, 2002, 2007a, 2007b). The Simpler Syntax approach enables a finely grained characterization of L2 processing that is neither fully structure-based nor overly driven by information on verb argument structure and pragmatics, with variation along both axes. ...
Chapter
This volume is the first dedicated to the growing field of theory and research on second language processing and parsing. The fourteen papers in this volume offer cutting-edge research using a number of different languages (e.g., Arabic, Spanish, Japanese, French, German, English) and structures (e.g., relative clauses, wh-gaps, gender, number) to examine various issues in second language processing: first language influence, whether or not non-natives can achieve native-like processing, the roles of context and prosody, the effects of working memory, and others. The researchers include both established scholars and newer voices, all offering important insights into the factors that affect processing and parsing in a second language.
... Jackendoff also, among others, does not distinguish grammaticality and acceptability. In his model of grammar, well-formedness is established through a constraint-based formalism while constraints may be violable, so that structural complexity (and less than perfect grammaticality) can arise through constraint conflict (Jackendoff 2007). This theoretical approach rests on the assumption that grammar contains multiple independent sources of generativity, among which phonology, syntax, and semantics are the most prominent ones. ...
... I cannot go into detail here, but for more information on gradience in language see Keller (2000), Jackendoff (2002), Newmeyer (2003), Sorace and Keller (2005), Jackendoff (2007), McClelland and Bybee (2007), Wasow (2009). ...
Conference Paper
Full-text available
Halliday's (1967) introduction of the term information structure virtually initiated research on semantic relationships between sentence structure elements based on the discursive position of an utterance. His work has greatly contributed to defining information structure as a formal element in the pragmatic structuring of propositions in discourse (Lambrecht 1994), in terms of the relationships between linguistic tools and circumstances on the one hand and/or the speaker's intentions on the other. Given that the analysis of this relationship is almost entirely novel to Slavic linguistics, I intend to present some new data on clitic and clitic cluster positioning in the linear order of Croatian sentences. I will offer an account of the basic features of the multi-leveled approach to the analysis of examples containing clitics and clitic clusters, which will allow for the establishment of criteria to classify sentences as unmarked or marked from the point of view of information structure. The premise underlying the work reported here is that information structure is part of conceptual structure. Within the Theory of Parallel Architecture (Jackendoff 2002), conceptual structure is the component of grammar that enters into a systematic relationship with the units of the phonological and syntactic components, determining the informational potential of the sentence. The empirical part presents preliminary research results on the relationship between the placement of clitics and clitic clusters and information structure in Croatian, a so-called "free word-order" language. The goal of this study is to set the ground for establishing constraints on the combinatorial properties of clitic and clitic cluster units and the hierarchy among them.
... Since grammaticality judgments are a side effect of being able to use language for communication (Jackendoff, 2007a), judgments about ambiguous and ungrammatical sentences can be used to query the language faculty in the same way that ambiguous figures and visual illusions can be used to query the visual system (Jackendoff, 2007b, p. 7). Linguists attempt to find places where linguistic processing can be perturbed with ungrammatical sentences, demonstrating the boundaries of a properly functioning linguistic system by showing where the system breaks down. ...
... In such cases, ungrammatical sentences may be misperceived as grammatical. Moreover, linguists can sometimes fail to attend to the control sentences that would help them to explain where linguistic processing has failed, and so fail to explain why a particular sentence type is ambiguous or ungrammatical (Jackendoff, 2007b, p. 7). But, since the target of grammaticality judgments is the breakdown of linguistic processing, linguistic competence can serve as a background for formulating ungrammatical and ambiguous sentences that target the structure of the language faculty and its interfaces with non-linguistic systems. ...
Article
Full-text available
Thought experimental methods play a central role in empirical moral psychology. Against the increasingly common interpretation of recent experimental data, I argue that such methods cannot demonstrate that moral intuitions are produced by reflexive computations that are implicit, fast, and largely automatic. I demonstrate, in contrast, that evaluating thought experiments occurs at a near-glacial pace relative to the speed at which reflexive information processing occurs in a human brain. So, these methods allow for more reflective and deliberative processing than has commonly been assumed. However, these methods may still provide insight into some human strategies for navigating unfamiliar moral dilemmas.
... However, not all possible word forms are stored in long-term memory (LTM); speakers construct words online using productive and semi-productive rules. Productive rules are used, for example, in regular word formation, such as adding -ly to adjectives to form adverbs (Jackendoff, 2007). In irregular cases, complete word forms are stored in the lexicon, and the application of the productive rule is blocked (e.g., good → goodly [blocked] → well). ...
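The stored-versus-computed division and the blocking mechanism described in this snippet are easy to make concrete. Below is a minimal sketch in Python, assuming a toy lexicon of stored irregular adverbs; the entries and function name are illustrative, not a claim about any particular speaker's mental storage.

```python
# Toy sketch of a productive rule with lexical blocking, as in
# good -> *goodly (blocked) -> well. Lexicon contents are illustrative.

IRREGULAR_ADVERBS = {"good": "well", "fast": "fast"}  # stored whole forms

def adverb(adjective: str) -> str:
    """Return a stored adverb if one exists; otherwise apply the
    productive -ly rule online."""
    if adjective in IRREGULAR_ADVERBS:    # stored entry blocks the rule
        return IRREGULAR_ADVERBS[adjective]
    return adjective + "ly"               # productive rule

print(adverb("quick"))  # quickly (rule applies)
print(adverb("good"))   # well    (rule blocked by stored form)
```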
Article
Full-text available
Various linguistic models have been developed to systematize language processes and provide a structured framework for understanding the complex network of language production and reception. However, these models have often been developed in isolation from neurolinguistic research, which continues to provide new insights into the mental processes involved in language production and comprehension. Conversely, neurolinguists often neglect the potential benefits of incorporating contemporary linguistic models into their research, although these models could help interpret specific findings and make complex concepts more accessible to readers. This paper evaluates the utility of Jackendoff's Parallel Architecture as a generic framework for explaining language acquisition. It also explores the potential for incorporating neurolinguistic findings by mapping its components onto specific neural structures, functions, and processes within the brain. To this end, we reviewed findings from a range of neurolinguistic studies on language acquisition and tested how their results could be represented using the Parallel Architecture. Our results indicate that the framework is generally well-suited to illustrate many language processes and to explain how language systems are built. However, to increase its explanatory power, it would be beneficial to add other linguistic and non-linguistic structures, or to signal that there is the option of adding such structures (e.g., prosody or pragmatics) for explaining the processes of initiating language acquisition or non-typical language acquisition. It is also possible to focus on fewer structures to show very specific interactions, or to zoom in on chosen structures and substructures to outline processes in more detail. Since the Parallel Architecture is a framework of linguistic structures for modeling language processes rather than a model of specific linguistic processes per se, it is open to new connections and elements, and therefore open to adaptations and extensions as indicated by new findings in neuro- or psycholinguistics.
... Excess entropy is a function only of the probability distribution on forms, seen as one-dimensional sequences of symbols unfolding in time. This independence from representational assumptions is an advantage, because there is as yet no consensus about the basic nature of the mental representations underlying human language [88,89]. Our results support the idea that key properties of language emerge from generic constraints on sequential information processing [18,90-92]. ...
Preprint
Full-text available
Human language is a unique form of communication in the natural world, distinguished by its structured nature. Most fundamentally, it is systematic, meaning that signals can be broken down into component parts that are individually meaningful -- roughly, words -- which are combined in a regular way to form sentences. Furthermore, the way in which these parts are combined maintains a kind of locality: words are usually concatenated together, and they form contiguous phrases, keeping related parts of sentences close to each other. We address the challenge of understanding how these basic properties of language arise from broader principles of efficient communication under information processing constraints. Here we show that natural-language-like systematicity arises from minimization of excess entropy, a measure of statistical complexity that represents the minimum amount of information necessary for predicting the future of a sequence based on its past. In simulations, we show that codes that minimize excess entropy factorize their source distributions into approximately independent components, and then express those components systematically and locally. Next, in a series of massively cross-linguistic corpus studies, we show that human languages are structured to have low excess entropy at the level of phonology, morphology, syntax, and semantics. Our result suggests that human language performs a sequential generalization of Independent Components Analysis on the statistical distribution over meanings that need to be expressed. It establishes a link between the statistical and algebraic structure of human language, and reinforces the idea that the structure of human language may have evolved to minimize cognitive load while maximizing communicative expressiveness.
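For readers unfamiliar with the quantity, excess entropy has a standard information-theoretic definition, given below in the convention common in work on statistical complexity; the preprint's own notation may differ.

```latex
% Excess entropy E of a stationary process X_1, X_2, ...
% (standard definition; the preprint's exact notation may differ)
E = \lim_{n\to\infty}\bigl[\, H(X_1,\dots,X_n) - n\,h_\mu \,\bigr],
\qquad
h_\mu = \lim_{n\to\infty} H(X_n \mid X_1,\dots,X_{n-1}).
% Equivalently, E is the mutual information between past and future:
% E = I(X_{-\infty:0}\,;\,X_{1:\infty}).
```

Here h_mu is the entropy rate; E measures how much information the past of a sequence carries about its future, which is why minimizing it favors codes whose parts are predictable from local context.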
... According to this research, such prepositions can be regarded as polysemous items, since each of these prepositions can individually assign different meanings to its object of reference, and these meanings can be linked cognitively through a conceptual spatial representation of the path. In his work, he proposed that, through embodiment, the speaker is able to cognitively extend the meaning of a lexical item to other conceptual meanings; that is, the basic spatial use of a preposition can be extended to metaphorical uses, whose spatial interpretation can be recovered through the semantic fields proposed by Jackendoff (1983, 1992). The semantic fields addressed in this research are temporal, possessional, identificational, circumstantial, and existential. ...
Article
Full-text available
The phenomenon of polysemy is a constant concern of cognitive studies, since this science understands that word meanings are typically polysemous and can be represented by conceptual categories structured around a central prototype. Cognitive linguistics therefore explains the polysemy of words in terms of the cognitive mechanisms that motivate it. In this article, we present the ways in which polysemy fits into the paradigm of cognitive linguistics and the advances this linguistic current has made in showing that polysemy is a natural condition of language, anchored in bodily and social experience. In this work, we reaffirm our proposal to investigate polysemy as a cognitive phenomenon on which language processing is based.
... Expectation in word processing: It is widely accepted that social information and linguistic knowledge are retained in memory alongside one another (Foulkes, 2010). A prominent model used to account for this is exemplar theory (see Pierrehumbert, 2001; Bybee, 2002; Jackendoff, 2007). Exemplar theories posit that individuals compare novel stimuli to similar instances of those stimuli that they have previously encountered. ...
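The comparison process that exemplar theories posit can be illustrated with a small similarity-based classifier. This is a schematic sketch only: the feature vectors, category labels, and sensitivity parameter below are invented for illustration and do not come from the works cited above.

```python
import math

# Schematic exemplar classifier: a novel token is compared to stored episodes
# and the best-supported category wins (cf. Pierrehumbert 2001). The feature
# vectors, labels, and sensitivity parameter are invented for illustration.

exemplars = [                      # (acoustic features, remembered category)
    ((1.0, 0.2), "[ɪn]"),
    ((0.9, 0.3), "[ɪn]"),
    ((0.1, 0.9), "[ɪŋ]"),
]

def classify(stimulus, memory, sensitivity=2.0):
    scores = {}
    for features, label in memory:
        # Similarity decays exponentially with distance from each exemplar.
        similarity = math.exp(-sensitivity * math.dist(stimulus, features))
        scores[label] = scores.get(label, 0.0) + similarity
    return max(scores, key=scores.get)

print(classify((0.2, 0.8), exemplars))  # -> "[ɪŋ]"
```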
Thesis
This thesis is about linguistic variation in swearing and its consequences for how speakers are socially evaluated. Abundant research has established that, beyond its perception as rude or impolite, swearing is hugely socially meaningful in a variety of ways (Stapleton, 2010; Beers Fägersten, 2012). Swearing has been shown to index solidarity (Daly et al., 2004), intimacy (Stapleton, 2003), differing forms of masculinity (De Klerk, 1997) and femininity (S. E. Hughes, 1992), honesty (Feldman et al., 2017), believability (Rassin & Heijden, 2005) and lack of intelligence (DeFrank & Kahlbaugh, 2019), among other traits. The activation of these social meanings also depends on language-external factors such as speaker gender (Howell & Giuliano, 2011), ethnicity (Jacobi, 2014) and social status (T. Jay & Janschewitz, 2008). What has not been established is whether this also depends on language-internal factors such as pronunciation, word formation or sentence structure. This thesis investigates the effect of variation from three different domains of language – phonetics, morphology and semantics/pragmatics – on the social evaluation of a speaker. To do so, the thesis takes an experimental approach using the variationist sociolinguistic framework. For variation in each domain, two experiments were used to test for different levels of awareness, following Squires's (2016) approach for grammatical variation (see also Schmidt, 1990). One experiment tested whether people perceived the variation, while a second tested whether people noticed the variation in the process of social evaluation; the concepts of perceiving and noticing roughly map to the Labovian concepts of the sociolinguistic indicator and marker respectively (Labov, 1972). At the level of phonetics, variation in the realisation of variable (ING) in swearwords (e.g., fucking vs fuckin) was first tested using a variant categorization task, revealing that listeners have an implicit bias towards the velar [ɪŋ] variant when hearing swearwords, compared to neutral words and non-words. An auditory matched-guise task then revealed that this same bias affects how listeners extract social information from (ING) tokens attached to swearwords in relation to social meanings typically associated with the variable (Schleef et al., 2017). This result suggests that, rather than pronunciation affecting how swearwords are socially evaluated, swearwords can affect how other phonetic sources of social meaning are evaluated.
... Yet the CAA is not merely interpretative (a characteristic feature of Chomskyan and most formal approaches) but, in line with Conceptual Semantics (cf. Jackendoff 2007, 2011), regards semantics as a bi-directional interface between linguistic and conceptual structures, and furthermore distinguishes semantic interface representations from conceptual structure proper (so-called "two-level semantics", cf. Lang/Maienborn 2011). ...
Article
Full-text available
Directionals like from New York, through the tunnel, into the dark represent a major class of spatial expressions, typically associated with locomotion (verbs). Semantically, many of their aspects are notoriously difficult to characterize, among them their characteristics (classification, typology), their relation to locatives (e.g. in New York, in the tunnel, in the dark), and their composition with verbs (especially in non-locomotion contexts). Cognitively, there is a big theoretical gap to fill between aspects of low-level motion perception and the conception of static situations in terms of non-actual (loco)motion. This paper first critically discusses Zwarts' explicit formal account of directionals, then introduces a Cognitivist attentional semantics, and finally applies the Cognitivist approach to directionals. It will be shown that attention-based conceptual representations are necessary components of directional semantics and help explain the aspects mentioned above.
... Rijkhoff, 2010, p. 223). In turn, these revisions and refinements provide Cognitive Science, of which cognitively-concerned Linguistics is a subfield (Jackendoff, 2007; Sinha, 2007), with an improved understanding of the mental architecture underpinning language, or at the very least of what that mental architecture must be able to account for (cf. Croft, 1998; Sandra, 1998; Taylor, 2012). ...
Thesis
Full-text available
This thesis is an empirical and theoretical contribution to the study of Adverb-Nominal Degree Constructions (ANDCs), adverbial degree constructions featuring nominal forms rather than adjectives (e.g. That is so you; This bar is very San Francisco). Situated broadly within the framework of Cognitive Linguistics, our study, the first large corpus-based investigation into ANDCs, investigates the expressed meaning of four proper names (1,500+ usage events) from four ontological categories: PLACE, TIME, PEOPLE, and FILM. While several competing models have already been proposed to handle ANDCs, three of our empirical findings highlight the need for an alternate account. Firstly, there are no grounds on which to claim that proper names in ANDCs are necessarily adjectival, as 1) almost all classic diagnostics for adjectivehood actually admit true N(P)s; and 2) proper names in ANDCs exhibit nouny characteristics (e.g. anaphoric binding). Secondly, ANDCs yield interpretations that cannot be accounted for by existing models. In addition to comparison (e.g. Your smile is very Mona Lisa), ANDCs express typicality (e.g. Pizza is very New York), inclination (e.g. I am in a very Harry Potter mood), and quantification (e.g. 2017 has been very Kurt Cobain), amongst others. Lastly, far from lexicalizing, proper names are exploited in ANDCs for their encyclopaedic potential, typically being used to metonymically evoke virtually any knowledge structure (gradable or otherwise) in the nominal sign's encyclopaedic network (e.g. very Harry Potter → locations / characters / props / weather / music / plot points from the Harry Potter films). We reconcile these observations by proposing that true N(P)s can participate in ANDCs as 1) access points to knowledge networks that 2) become associated with a meaningful, gradable, pragmatic scale R during the process of conceptual combination. It is R that is intensified rather than the N(P) itself.
... Critics such as Jackendoff (2002, 2007) and Stokhof and van Lambalgen (2011) have taken issue with the "hardening" of this distinction over time, to the point of the possible irreconcilability of competence models of grammar with processing constraints related to performance (see Lobina 2017 for a bridging proposal in line with Minimalism in generative grammar). The former even offers an alternative generative approach, the Parallel Architecture, in order to bridge this gap and integrate linguistics more firmly within the cognitive sciences. ...
Article
Full-text available
This article surveys the philosophical literature on theoretical linguistics. The focus of the paper is centred around the major debates in the philosophy of linguistics, past and present, with specific relation to how they connect to the philosophy of science. Specific issues such as scientific realism in linguistics, the scientific status of grammars, the methodological underpinnings of formal semantics, and the integration of linguistics into the larger cognitive sciences form the crux of the discussion.
... A natural language is a structured symbolic system consisting of stored lexical items and combinatorial rules or principles (Jackendoff, 2007). Some cognitive scientists have proposed that these structural properties reflect those of an underlying amodal representational system often referred to as a language of thought (Fodor, 1975). ...
Article
Full-text available
What role does language play in our thoughts? A longstanding proposal that has gained traction among supporters of embodied or grounded cognition suggests that it serves as a cognitive scaffold. This idea turns on the fact that language—with its ability to capture statistical regularities, leverage culturally acquired information, and engage grounded metaphors—is an effective and readily available support for our thinking. In this essay, I argue that language should be viewed as more than this; it should be viewed as a neuroenhancement. The neurologically realized language system is an important subcomponent of a flexible, multimodal, and multilevel conceptual system. It is not merely a source for information about the world but also a computational add-on that extends our conceptual reach. This approach provides a compelling explanation of the course of development, our facility with abstract concepts, and even the scope of language-specific influences on cognition.
... Psycho-/neurolinguistics has developed complex experimental designs and techniques that have made it possible to study linguistic processing with millisecond and voxel precision (Bornkessel-Schlesewsky and Schlesewsky, 2009; Friederici, 2011). Unfortunately, the dialogue between the two disciplines has not always been constant, with theoretical linguistics often proceeding without drawing on experimental results, and psycho-/neurolinguistics relying only to a limited extent on linguistic theory (see discussion in Ferreira, 2005; Poeppel and Embick, 2005; Jackendoff, 2007; Embick and Poeppel, 2015). This has resulted in a sharp separation between the formal/computational level of linguistic analysis and the functional/neuro-anatomical investigation of language, limiting the depth with which we can investigate what we know and what we do with language. ...
... Such frameworks include the Parallel Architecture (Culicover & Jackendoff 2005; Jackendoff 2002), HPSG (Pollard & Sag 1994), and Construction Grammar (Goldberg 1995). We focus here on the Parallel Architecture (Culicover & Jackendoff 2005; Jackendoff 2002); see Jackendoff (2007) for an accessible and psycholinguistically oriented discussion. This framework assumes separate generative capacities for semantics, syntax, and phonology, and proposes that they are linked via interfaces, or mappings, that involve input from the lexicon. ...
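The tripartite organization described in this snippet lends itself to a simple illustration: a lexical item pairs a small piece of phonology, a small piece of syntax, and a small piece of semantics, and those pairings do the interface work. The sketch below is a deliberately minimal caricature; the string representations are placeholders, not Jackendoff's actual formalism.

```python
from dataclasses import dataclass

# Minimal sketch of a Parallel Architecture lexical item: independently
# generated phonological, syntactic, and semantic structures linked by the
# item itself. Representations here are placeholders only.

@dataclass
class LexicalItem:
    phonology: str    # phonological structure, e.g. a segmental string
    syntax: str       # syntactic structure, e.g. a category label
    semantics: str    # conceptual-structure fragment

    def interface_links(self):
        # In the framework, the lexical item licenses the correspondence
        # among the three independent structures.
        return [("phon", "syn"), ("syn", "sem"), ("phon", "sem")]

cat = LexicalItem(phonology="/kæt/", syntax="N", semantics="[Object CAT]")
print(cat)
print(cat.interface_links())
```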
Article
Structural priming is poorly understood and cannot inform accounts of grammar for two reasons. First, those who view performance as grammar + processing will always be able to attribute psycholinguistic data to processing rather than grammar. Second, structural priming may be simply an example of hysteresis effects in general action planning. If so, then priming offers no special insight into grammar.
Article
Within the cognitive sciences, most researchers assume that it is the job of linguists to investigate how language is represented, and that they do so largely by building theories based on explicit judgments about patterns of acceptability – whereas it is the task of psychologists to determine how language is processed, and that in doing so, they do not typically question the linguists’ representational assumptions. We challenge this division of labor, by arguing that structural priming provides an implicit method of investigating linguistic representations that should end the current reliance on acceptability judgments. Moreover, structural priming has now reached sufficient methodological maturity to provide substantial evidence about such representations. We argue that evidence from speakers’ tendency to repeat their own and others’ structural choices supports a linguistic architecture involving a single ‘shallow’ level of syntax that is connected to a semantic level containing information about quantification, thematic relations, and information structure, as well as to a phonological level. Many of the linguistic distinctions that are often used to support complex (or multi-level) syntactic structure are instead captured by semantics; however, the syntactic level includes some specification of ‘missing’ elements that are not realized at the phonological level. We also show that structural priming provides evidence about the consistency of representations across languages and about language development. In sum, we propose that structural priming provides a new basis for understanding the nature of language.
... Due to progress in developing various experimental techniques and in broadening methodological approaches in the last decades, more research efforts were initiated to compare structures and computations across different cognitive subsystems rather than focusing on a single cognitive domain. For example, the computations underlying phonological and lexical syntax in the language domain seem to be shared to varying degrees with other human-specific cognitive domains: rhythmic synchronization in music (e.g., Patel & Daniele, 2003; Patel, 2003; Patel, 2008); hierarchical structures in music (e.g., Koelsch, Rohrmeier, Torrecuso & Jentschke, 2013), vision (e.g., Jackendoff, 2007a; Gershman, Tenenbaum & Jäkel, 2015), and action perception (e.g., Wakita, 2014); binary structures in arithmetic (e.g., Bender & Beller, 2013). Considering that different subsystems of the human mind operate on different types of discrete symbolic elements along with different sensory-motor systems, it remains to be seen which linguistic principles belong to the biological capacity and are hardwired properties of the human-specific neural architecture (e.g., Fadiga, Craighero & D'Ausilio, 2009; Jackendoff, 2009; Stout, 2010; Heinz & Idsardi, 2011; Hurford, 2011; Pesetsky & Katz, 2011; Arbib, 2012; Fitch & Martins, 2014). ...
... For example, argument R has been used (Pietroski 2000; McGilvray 1998; Jackendoff 2002, 2007) to support the internalist view according to which semantics consists of some kind of relation between a mental lexicon and a class of mental representations of some kind, and that meanings are some form of syntactically individuated entities (see e.g. Chomsky 1992). ...
Research
Full-text available
Can natural language reference be naturalised? I shall argue that this question allows a positive as well as a negative answer. In one sense, the possibility of giving an account of the relation of reference within a naturalistic theory seems doomed to failure, because this would require full acceptance of the entire common-sense ontology within a naturalistic view of the world. On the other hand, in a weaker sense, it seems plausible that we can give a naturalistic account of the referential abilities of speakers. Starting from this analysis, I conclude with some considerations on a possible role for the Turing Test and, more generally, for the methodology of simulations in cognitive science.
... In its contemporary interdisciplinary context (Jackendoff 2002, 2007a, 2007b), linguistics plays a pivotal role in the numerous challenges that lie ahead of computational social science. After all, in 2007, a third of all digital data consisted of text (Hilbert 2014). ...
Article
Full-text available
The paper explores the importance of closer interaction between data science and evolutionary linguistics, pointing to the potential benefits for both disciplines. In the context of big data, the microblogging social networking service – Twitter – can be treated as a source of empirical input for analyses in the field of language evolution. In an attempt to utilize this kind of disciplinary interplay, I propose a model, which constitutes an adaptation of the Iterated Learning framework, for investigating the glossogenetic evolution of sublanguages.
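The Iterated Learning framework that the proposed model adapts can be summarized as a transmission loop in which each generation learns from the previous generation's output. The sketch below is a bare-bones illustration under invented meanings, forms, and a deliberately trivial learning rule; it is not the paper's model.

```python
import random

# Bare-bones iterated-learning loop: each agent learns a form-meaning
# mapping from the previous agent's productions. Meanings, forms, and the
# learning rule are trivial placeholders.

MEANINGS = ["m1", "m2", "m3"]
FORMS = ["ba", "di", "gu"]

def learn(observations):
    # Learner simply memorizes the form it observed for each meaning.
    return dict(observations)

def produce(lexicon, meaning):
    # Produce the stored form, or innovate at random (transmission noise).
    return lexicon.get(meaning, random.choice(FORMS))

language = {}
for generation in range(10):
    data = [(m, produce(language, m)) for m in MEANINGS]
    language = learn(data)

print(language)  # with this trivial learner the mapping stabilizes at once
```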
... A natural language is a structured symbolic system that involves a systematic mapping between a virtually unbounded set of thoughts and a virtually unbounded set of sounds or manual gestures. Given our limited cognitive abilities, it is common to explain linguistic competence in terms of a finite set of stored lexical units or complexes and combinatorial principles (Jackendoff, 2007). Classical non-embodied cognitive science has tended to view the symbolic nature of language as reflective of the computational properties of an underlying amodal representational system (Fodor, 1975). ...
Article
Full-text available
Recently, there has been a great deal of interest in the idea that natural language enhances and extends our cognitive capabilities. Supporters of embodied cognition have been particularly interested in the way in which language may provide a solution to the problem of abstract concepts. Toward this end, some have emphasized the way in which language may act as form of cognitive scaffolding and others have emphasized the potential importance of language-based distributional information. This essay defends a version of the cognitive enhancement thesis that integrates and builds on both of these proposals. I argue that the embodied representations associated with language processing serve as a supplementary medium for conceptual processing. The acquisition of a natural language provides a means of extending our cognitive reach by giving us access to an internalized combinatorial symbol system that augments and supports the context-sensitive embodied representational systems that exist independently of language.
... I am aware of the indulgence I ask of the reader. All references to Jackendoff are to Jackendoff (2007) unless otherwise noted. My thanks to Georges Rey for helpful comments on a draft, to Nancy Ritter for organizing these exchanges, and especially to Ray Jackendoff for the instruction and stimulation his work has provided and continues to provide. ...
Article
Full-text available
In this note, I clarify the point of my paper "The Nature of Semantics: On Jackendoff's Arguments" (NS) in light of Ray Jackendoff's comments in his "Linguistics in Cognitive Science: The State of the Art." Along the way, I amplify my remarks on unification.
... For example, according to Tomasello (2005: 186), "children construct from their experience with a particular language some kinds of grammatical categories, based on the function of particular words and phrases in particular utterances – followed by generalizations across these" (see also Bybee and McClelland 2005; Goldberg and Del Giudice 2005). As Jackendoff (2007) points out, this hypothesis must be correct at some level – children have to process and glean patterns from the input they receive in order to learn the language of their community. The controversial question is whether children bring biases to their input that influence the generalizations they make. ...
Article
Usage-based accounts of language-learning ought to predict that, in the absence of linguistic input, children will not communicate in language-like ways. But this prediction is not borne out by the data. Deaf children whose hearing losses prevent them from acquiring the spoken language that surrounds them, and whose hearing parents have not exposed them to a conventional sign language, invent gesture systems, called homesigns, that display many of the properties found in natural language. Children thus have biases to structure their communication in language-like ways, biases that reflect their cognitive skills. But why do the deaf children recruit this particular set of cognitive skills, and not others, to their homesign systems? In other words, what determines the biases children bring to language-learning? The answer is clearly not linguistic input.
Chapter
Full-text available
What is the remit of theoretical linguistics? How are human languages different from animal calls or artificial languages? What philosophical insights about language can be gleaned from phonology, pragmatics, probabilistic linguistics, and deep learning? This book addresses the current philosophical issues at the heart of theoretical linguistics, which are widely debated not only by linguists, but also philosophers, psychologists, and computer scientists. It delves into hitherto uncharted territory, putting philosophy in direct conversation with phonology, sign language studies, supersemantics, computational linguistics, and language evolution. A range of theoretical positions are covered, from optimality theory and autosegmental phonology to generative syntax, dynamic semantics, and natural language processing with deep learning techniques. By both unwinding the complexities of natural language and delving into the nature of the science that studies it, this book ultimately improves our tools of discovery aimed at one of the most essential features of our humanity, our language.
Book
How can we unravel the evolution of language, given that there is no direct evidence about it? Rudolf Botha addresses this intriguing question in his fascinating new book. Inferences can be drawn about language evolution from a range of other phenomena, serving as windows into this prehistoric process. These include shell-beads, fossil skulls and ancestral brains, modern pidgin and creole languages, homesign systems and emergent sign languages, modern motherese, language use of modern hunter-gatherers, first language acquisition, similarities between language and music, and comparative animal behaviour. The first systematic analysis of the Windows Approach, it will be of interest to students and researchers in many disciplines, including anthropology, archaeology, linguistics, palaeontology and primatology, as well as anyone interested in how language evolved.
Article
Human reasoning goes beyond knowledge about individual entities, extending to inferences based on relations between entities. Here we focus on the use of relations in verbal analogical mapping, sketching a general approach based on assessing similarity between patterns of semantic relations between words. This approach combines research in artificial intelligence with work in psychology and cognitive science, with the aim of minimizing hand coding of text inputs for reasoning tasks. The computational framework takes as inputs vector representations of individual word meanings, coupled with semantic representations of the relations between words, and uses these inputs to form semantic-relation networks for individual analogues. Analogical mapping is operationalized as graph matching under cognitive and computational constraints. The approach highlights the central role of semantics in analogical mapping.
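The pipeline described here (word vectors, relation representations, semantic-relation networks, graph matching) can be caricatured in a few lines. In the sketch below, random vectors stand in for learned relation embeddings, and a greedy matcher stands in for the paper's constrained graph-matching procedure; all names and data are invented.

```python
import numpy as np

# Caricature of verbal analogical mapping as graph matching: each analogue
# is a network of word pairs labeled with relation vectors, and mapping
# greedily pairs up the most relationally similar edges across analogues.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)

def rel():
    return rng.normal(size=8)   # placeholder relation embedding

source = {("sun", "planet"): rel(), ("planet", "moon"): rel()}
target = {("nucleus", "electron"): rel(), ("atom", "nucleus"): rel()}

candidates = sorted(
    ((s, t, cosine(vs, vt)) for s, vs in source.items()
                            for t, vt in target.items()),
    key=lambda c: -c[2],
)
mapped_s, mapped_t = set(), set()
for s, t, score in candidates:     # greedy one-to-one edge mapping
    if s not in mapped_s and t not in mapped_t:
        print(f"{s} -> {t}  (similarity {score:.2f})")
        mapped_s.add(s)
        mapped_t.add(t)
```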
Thesis
This thesis concerns phonological computation and its status as cognition. It asks what a phonological computation looks like, how the possible patterns of linguistic sounds are constrained by facts about the physical world, and to what extent those facts are recapitulated within phonology itself. The world's languages display a variety of dynamic phonological patterns. Some are relatively common and recur frequently across language families. Other patterns are less frequent, occurring rarely or in isolated cases. The formal notation used in phonological theories (supposed to embody something real happening in the human brain) can be used to express both rare and frequently recurring patterns of alternation. It can in principle be used to express still other kinds of alternations that seem never to occur, or even alternations that are impossible in human language. One of the most important questions in phonological theory, posed for example by Reiss (2003: 335), is: what is a possible rule? Or, beyond phonological grammars, what is a possible language? This thesis explores these questions in the light of phonetically arbitrary alternations, known as crazy rules (Bach & Harms 1972). Phonological alternations are a universal property of natural human languages; while many phonological alternations are easily stated in phonetic, typically articulatory, terms, crazy rules appear to obey no principle of articulatory phonetics. In computational terms, crazy rules are arbitrary because from any input X, any output Y can be produced. This thesis provides an overview of naturalness in phonological theory, a tool used with the ambition of endowing phonological theory with explanatory and predictive power. A survey of 31 crazy rules suggests that there is no naturalness in phonology. The survey yields an important observation: crazy rules are only ever crazy at the segmental level. Although phonetically arbitrary alternation patterns exist, there is no crazy alternation pattern that depends on syllable structure or other suprasegmental structure. The thesis then considers phonological theory in the light of crazy rules. It argues that formal naturalness faces serious conceptual and empirical problems, making a number of false predictions about possible and impossible languages. It suggests a possible explanation of the rarity of crazy rules that rests on facts external to phonology: facts about articulation, perception, and the evolution of human languages. Setting naturalness aside, the thesis argues that Substance-Free Phonology (Hale & Reiss 2008) provides a promising way of conceptualizing the investigation of the computational properties of phonology. Closing the theoretical part, the thesis proposes a substance-free analysis of a complex pattern of alternations in Campidanese Sardinian. The model makes good use of a distinct melodic domain and a distinct suprasegmental domain. In the former, craziness is possible, and so arbitrary melodic computations are too. In the latter, craziness is impossible, and lenition and fortition are perfectly regular phonological processes.
The thesis offers a review of Artificial Grammar Learning experiments. Although there is substantial evidence for a complexity bias in the laboratory learning of phonological patterns, the evidence for a naturalness bias is much weaker. Finally, the thesis presents an EEG experimental protocol that can be used to probe alternations and compare them with phonological computation.
Presentation
Full-text available
COGS 149 01: Music, Language, and Cognition (UC Merced, Fall 2021, Syllabus)
Book
Cognitive linguists are bound by the cognitive commitment, which is the commitment to providing a characterization of the general principles governing all aspects of human language, in a way that is informed by, and accords with, what is known about the brain and mind from other disciplines. But what do we know about aspects of cognition that are relevant for theories of language? Which insights can help us build cognitive reality into our descriptive practice and move linguistic theorizing forward? This unique study integrates research findings from across the cognitive sciences to generate insights that challenge the way in which frequency has been interpreted in usage-based linguistics. It answers the fundamental questions of why frequency of experience has the effect it has on language development, structure and representation, and what role psychological and neurological explorations of core cognitive processes can play in developing a cognitively more accurate theoretical account of language.
Article
In this paper I try to establish bidirectional links between the grammar and the processing (especially production) of agreement in order to provide the broad strokes of a psychologically viable theory of agreement. I start by arguing that full encapsulation and full interactivity in agreement operations are not realistic options. The question therefore becomes how much of each should be posited on principled grounds. It is further argued that in language production agreement 'leaks', in the sense that conceptual structure is ready to interfere in the establishment of agreement ties, and that that interference is neatly modulated by morphological strength, in the sense that morphology acts as a barrier to it. I suggest a series of components that a theory of agreement must contain if it is to be psycholinguistically realistic. Among these: a. constant conceptual pressure (leaking) and varying degrees of morphologisation both inter- and intra-linguistically (blocking); b. constructional 'listing', which adds to the division of labour between direct semantic influence and encapsulated feature transmission; c. 'avalanching' (a chain reaction of chunking), which results in near encapsulation in practice in the minds of speakers of languages with a rich morphology; d. a process of 'Match and Check', which ensures automatic, non-strategic computations of massive feature redundancy (Match) without the need for accompanying access to conceptual structure (Check).
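Component d is the most algorithmic of the four, and one purely illustrative reading of it can be sketched as two passes over feature bundles: an automatic Match over shared formal features, and a Check that consults conceptual structure only where morphology leaves agreement underdetermined. The feature bundles and the committee example below are invented; this is one possible reading of the proposal, not the author's implementation.

```python
# Illustrative two-step agreement computation in the spirit of 'Match and
# Check': Match compares redundant morphosyntactic features automatically;
# Check lets conceptual structure weigh in where morphology is weak.
# Feature bundles and the example are invented for illustration.

def match(controller, target):
    # Automatic, non-strategic comparison of shared formal features.
    shared = controller.keys() & target.keys()
    return all(controller[f] == target[f] for f in shared)

def check(controller, conceptual_number=None):
    # Conceptual pressure can 'leak' through weak morphology,
    # as in British English "the committee are ...".
    return conceptual_number or controller["number"]

subject = {"number": "sg", "person": 3}
verb = {"number": "sg", "person": 3}
print(match(subject, verb))   # True: formal features agree
print(check(subject, "pl"))   # 'pl': conceptual number wins here
```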
Method
Full-text available
This file serves to provide a comprehensive (self-archived) bibliography of references related to working memory and language learning (including first and second language acquisition and processing). I am putting this up here in the hope that it can help those researchers and students interested in similar research topics as I am (as enthusiastic with working memory and language as I am). I will try my best to update this regularly (so keep an eye on it!). Enjoy!
Article
Full-text available
With a particular reference to second language (L2), we discuss (1) how structural priming can be used to tap into L2 representations and their relationships with first and target language representations; and (2) how complex networks additionally can be used to reveal the global and local patterning of L2 linguistic features and L2 developmental trajectories.
Chapter
One vision of the nature of language holds that a language consists of a set of symbolic unit types, and a set of units of each type, together with a set of grammatical principles that constrain how these units can be used to compose other units, and a system of rules that project structured arrangements of such units onto other structured arrangements of units. This chapter reviews this vision and the use of distributed neural networks to capture it, covering motivations for the approach based on phenomena of language, some extant models using the approach, and prospects for the further development of this approach to understanding the emergence of language. The presence of quasi-regularity as well as sub-regularity in the English past tense challenges the approach of characterizing language knowledge as a system of rules.
Article
This paper is an attempt to take the idea of opportunism in language processing seriously – and its implications for language theory, if one is to avoid what Poeppel and Embick (2005) call "interdisciplinary cross-sterilization", that is, the failure of linguistics and psycholinguistics to communicate with each other. It is also an attempt to force a deeper reflection on 1) the shape of a viable and useful theory of language, and 2) the relation between (and respective place of) linguistics and experimental psycholinguistics in the study of language. Towards that, I review a number of psycholinguistic findings with a view to showing how routinely parsers opt for opportunistic (as opposed to 'elegant') ways out of processing dilemmas. Most of the evidence reviewed involves research of a cross-linguistic type, the common thread being that different languages resort to different solutions to the same processing problems, even when a unitary solution to at least many of these problems would be computationally within easy reach. The main purpose of this review is to provide a quantitatively suggestive account of how massively opportunism works in setting processing biases. Based on it, I go on to suggest that grammars can only be psychologically viable if they incorporate a fairly large number of interacting constraints, a default ability to generate pieces of structure without a commitment to satisfy large-scale well-formedness conditions, and no strict, fixed ordering of operations. These observations are compatible with a view of language as a complex, dynamical system of co-adapted traits, a system containing a fairly large number of possible initial states and a fairly large number of functionally optimal (opportunistic) continuations of those states. This work assumes the merits of espousing psychological adequacy.
Article
Full-text available
Contextual flexibility has been the focus of considerable research in cognitive science. The pessimistic view expressed by Fodor on this issue has been challenged either by modular approaches or by proposals based on common codes/spaces where information can be integrated. I analyse these views and the middle-ground approach explored by Shanahan and Baars (2005), and propose a different non-modular account. The general idea is that flexible integration of information is essentially ensured thanks to the hierarchical and schematic organization of memory, and to the bottom-up/top-down dynamic of its associative activation. In this perspective, context is not a crucial part of the problem; it is instead key to the solution: the different inputs in a context activate schemata that compete and integrate with each other, so that the schemata that are most coherent with the context will also be the most activated. I also consider the role that might be played by consciousness in this process, especially with regard to cases of extreme flexibility, that is, cases in which creative thoughts are formed.
Chapter
This chapter reviews sociophonetic evidence for the emergence of linguistic knowledge: that is, how knowledge about linguistic units and processes are shaped by socialisation, and by speaking and listening in social contexts. We focus in particular on developing research areas, highlighting provocative findings that have wide-ranging implications for our understanding of the cognitive representation of language. We consider how sociophonetic associations emerge at the individual and group levels, and how we can model the combined properties of linguistic and indexical information. We also present a critical discussion of some issues in exemplar theory as the framework currently best placed to model the interaction between linguistic and indexical knowledge.
Article
Full-text available
This book presents new work on how Merge and formal features, two basic factors in the Minimalist Program, should determine the syntactic computation of natural language. Merge combines similar objects into more complex ones. Formal features establish dependencies within objects. This book examines the intricate ways in which these two factors interact to generate well-formed derivations in natural language. It is divided into two parts, concerned with formal features and with interpretable features – a subset of formal features. The book combines grammatical theory with the analysis of data drawn from a wide range of languages, both in the adult grammar and in first language acquisition. The mechanisms at work in linguistic computation are considered in relation to a variety of linguistic phenomena, including A-binding, A'-dependencies and reconstruction, agreement, word order, adjuncts, pronouns, and complementizers.
Thesis
Full-text available
The experimental work and the theoretical model presented in this thesis explore the behavior of the sentence production system in perceptually, conceptually, and syntactically changing environments across languages. Nine experiments examine how speakers of different languages integrate available perceptual, conceptual, and syntactic information during the production of sentences. Such integration occurs under the global control of canonical causality and automated syntax. Analysis of speakers’ performance in a perceptually manipulated setting demonstrated that perceptual motivations for word order alternation are relatively weak and limited to the initial event apprehension. In addition, salience-driven choices of word order are realized differently in different syntactic structures and in languages with different grammatical systems. Combining perceptual and conceptual priming paradigms did not substantially improve cueing efficiency. By contrast, early availability of lexical and syntactic information led to the most consistent alternation of word order. I conclude that the uptake of perceptual information does not directly influence structural processing. General cognitive processes, such as attentional control and heightened memory activation, actively contribute to a concept’s accessibility status, but the syntactic organization of a spoken sentence constitutes a relatively independent psychological reality: it can be realized partially as a product of the aforementioned operations but does not directly depend on them.
Article
The aim of the present paper is to understand what the notions of explanation and prediction in contemporary linguistics mean, and to compare various aspects that the notion of explanation encompasses in that domain. The paper is structured around an opposition between three main styles of explanation in linguistics, which I propose to call ‘grammatical’, ‘functional’, and ‘historical’. Most of this paper is a comparison between these different styles of explanations and their relations (Sections 3, 4, 7, and 8). A second, more methodological aspect this paper seeks to clarify concerns the extent to which linguistic explanations can be viewed as predictive, rather than merely descriptive (Sections 2, 5, and 6), and the problem of whether linguistic explanations ought to be causal, rather than noncausal (Section 6). I argue that the notion of prediction is equally applicable in linguistics as in other empirical sciences. The extent to which the computational model of generative syntax can be viewed as providing a causal or psychologically realist model of language is more controversial (Sections 5–9).
Article
Full-text available
This paper provides a unified account of English subject-auxiliary inversion (SAI). It argues that SAIs, as they have been called in the literature, belong to two semantically distinct constructions. The first is the Auxiliary Subject Construction (ASC), one that merely reverses the subject and auxiliary order, without the fronting of another unit. It functions to mark non-indicative moods. The second SAI construction is the X Auxiliary Subject Construction (XASC), in which the auxiliary-subject (AS) order is accompanied by the fronting of a unit from its original, post-subject position in the canonical, SV-order sentence. The XASC serves a different purpose from the ASC, namely to focus the fronted unit. As such, it shares both structural and functional affinity with full-verb inversion (Chen 2003), referred to hereafter as the X Verb Subject Construction (XVSC) for the sake of consistency. The second purpose of this study is to address the issue of the invertability of the subject auxiliary/verb order. Drawing on Deane (1992), I propose an Invertability Hypothesis, which applies to both the XASC and the XVSC. On this hypothesis, invertability depends on the strength of the linkage between the fronted unit and the auxiliary/verb that exists in the canonical sentence. The stronger the link, the more likely the order of the subject and the auxiliary/verb will be inverted once the unit is fronted. With this analysis - one that is decidedly different from previous accounts (e.g. Goldberg 2006) - I intend to demonstrate that the functional/cognitive approach to language is indeed capable of handling a complex construction such as inversion, the generalization of which generative linguists believe can only be stated formally (Newmeyer 1998; Borsley and Newmeyer 2009; Lidz and Williams 2009).
Article
This paper focuses on the linguistic evidence base provided by proponents of conceptualism (e.g., Chomsky) and rational realism (e.g., Katz) and challenges some of the arguments alleging that the evidence allowed by conceptualists is superior to that of rational realists. Three points support this challenge. First, neither conceptualists nor realists are in a position to offer direct evidence. This challenges the conceptualists’ claim that their evidence is inherently superior. Differences between the kinds of available indirect evidence will be discussed. Second, at least some of the empirical evidence provided by the conceptualist is flawed. It is not obtained independently of theoretical commitments, alternative interpretations have not been ruled out, and some of the thought experiments intended to extend the evidence base are conceptually flawed. Third, the widely held assumption that rational realism disallows empirical evidence relevant to linguistics is dubious. It will be shown that the limitation imposed by rational realism concerns strictly formal linguistics. The rationalist realist has no reason to impose any restriction on the evidence relevant to psycholinguistics. I conclude that it is a mistake to dismiss realism based on the assumption that it imposes undue restrictions on evidence that is relevant to linguistics.
Article
Our target article proposed that language production and comprehension are interwoven, with speakers making predictions of their own utterances and comprehenders making predictions of other people's utterances at different linguistic levels. Here, we respond to comments about such issues as cognitive architecture and its neural basis, learning and development, monitoring, the nature of forward models, communicative intentions, and dialogue.
Article
This article analyses the discursive and rhetorical strategies that help promote an image of competence in the corporate histories provided by the webpages of the top 25 companies ranked in the 2008 Fortune Most Admired 500. The analysis sheds light on the lexicogrammatical features deployed within a temporally structured framework. Furthermore, the linguistic approach is integrated with the modalities of visual semiotics. Specifically, referring to Lemke’s “trifunctional” theoretical framework, this article discusses the mapping of organizational meanings as competence-oriented thematic pathways developed cross-modally. Against this background, it is argued that the identification of the construals of time can be particularly useful for the retrieval of meanings underlying the representation of competence as a process.
Article
Full-text available
This paper provides a concise overview of Constructions at Work (Goldberg 2006). The book aims to investigate the relevant levels of generalization in adult language, how and why generalizations are learned by children, and how to account for cross-linguistic generalizations.
Article
Full-text available
Although Pickering & Garrod (P&G) argue convincingly for a unified system for language comprehension and production, they fail to explain how such a system might develop. Using a recent computational model of language acquisition as an example, we sketch a developmental perspective on the integration of comprehension and production. We conclude that only through development can we fully understand the intertwined nature of comprehension and production in adult processing.
Article
Full-text available
Although we agree with Pickering & Garrod (P&G) that prediction-by-simulation and prediction-by-association are important mechanisms of anticipatory language processing, this commentary suggests that they: (1) overlook other potential mechanisms that might underlie prediction in language processing, (2) overestimate the importance of prediction-by-association in early childhood, and (3) underestimate the complexity and significance of several factors that might mediate prediction during language processing.
Article
Full-text available
We consider a computational model comparing the possible roles of "association" and "simulation" in phonetic decoding, demonstrating that these two routes can contain similar information in some "perfect" communication situations and highlighting situations where their decoding performance differs. We conclude that optimal decoding should involve some sort of fusion of association and simulation in the human brain.
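As a purely illustrative companion to this abstract, the following Python sketch shows one generic way two decoding routes could be fused: each route is treated as a probability distribution over candidate phonemes, and the two are combined log-linearly. The weighting scheme and all numbers are invented assumptions, not the authors' implementation.

import math

def fuse(p_assoc, p_simul, w=0.5):
    """Log-linear fusion of an 'association' route and a 'simulation' route,
    each expressed as a distribution over the same candidate phonemes."""
    scores = {c: w * math.log(p_assoc[c]) + (1 - w) * math.log(p_simul[c])
              for c in p_assoc}
    z = sum(math.exp(s) for s in scores.values())
    return {c: math.exp(s) / z for c, s in scores.items()}

# Hypothetical route outputs for an ambiguous stop consonant.
association = {"b": 0.6, "p": 0.4}
simulation = {"b": 0.3, "p": 0.7}
print(fuse(association, simulation))
# The fused estimate lies between the two routes: a weighted compromise of
# the kind the abstract's "some sort of fusion" conclusion suggests.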
Article
Full-text available
Fundamental to spatial knowledge in all species are the representations underlying object recognition, object search, and navigation through space. But what sets humans apart from other species is our ability to express spatial experience through language. This target article explores the language of objects and places, asking what geometric properties are preserved in the representations underlying object nouns and spatial prepositions in English. Evidence from these two aspects of language suggests there are significant differences in the geometric richness with which objects and places are encoded. When an object is named (i.e., with count nouns), detailed geometric properties - principally the object's shape (axes, solid and hollow volumes, surfaces, and parts) - are represented. In contrast, when an object plays the role of either "figure" (located object) or "ground" (reference object) in a locational expression, only very coarse geometric object properties are represented, primarily the main axes. In addition, the spatial functions encoded by spatial prepositions tend to be nonmetric and relatively coarse, for example, "containment," "contact," "relative distance," and "relative direction." These properties are representative of other languages as well. The striking differences in the way language encodes objects versus places lead us to suggest two explanations: First, there is a tendency for languages to level out geometric detail from both object and place representations. Second, a nonlinguistic disparity between the representations of "what" and "where" underlies how language represents objects and places. The language of objects and places converges with and enriches our understanding of corresponding spatial representations.
Article
Full-text available
We explore the capacity for music in terms of five questions: (1) What cognitive structures are invoked by music? (2) What are the principles that create these structures? (3) How do listeners acquire these principles? (4) What pre-existing resources make such acquisition possible? (5) Which aspects of these resources are specific to music, and which are more general? We examine these issues by looking at the major components of musical organization: rhythm (an interaction of grouping and meter), tonal organization (the structure of melody and harmony), and affect (the interaction of music with emotion). Each domain reveals a combination of cognitively general phenomena, such as gestalt grouping principles, harmonic roughness, and stream segregation, with phenomena that appear special to music and language, such as metrical organization. These are subtly interwoven with a residue of components that are devoted specifically to music, such as the structure of tonal systems and the contours of melodic tension and relaxation that depend on tonality. In the domain of affect, these components are especially tangled, involving the interaction of such varied factors as general-purpose aesthetic framing, communication of affect by tone of voice, and the musically specific way that tonal pitch contours evoke patterns of posture and gesture.
Article
Full-text available
The Chomskyan revolution in linguistics in the 1950s in essence turned linguistics into a branch of cognitive science (and ultimately biology) by both changing the linguistic landscape and forcing a radical change in cognitive science to accommodate linguistics as many of us conceive of it today. More recently Chomsky has advanced the boldest version of his naturalistic approach to language by proposing a Minimalist Program for linguistic theory. In this article, we wish to examine the foundations of the Minimalist Program and its antecedents and draw parallelisms with (meta-)methodological foundations in better-developed sciences such as physics. Once established, such parallelisms, we argue, help direct inquiry in linguistics and cognitive science/biology and unify both disciplines.
Article
Full-text available
Contents: Introduction; On the Construct ‘Predicate’; The Structure of Signs; Morphology; The Lexical-Functional Structure of Predicates With and Without Particles; Modification; Passive; Causatives; Middles; References.
Article
Full-text available
The popularity of the study of language and the brain is evident from the large number of studies published in the last 15 or so years that have used PET, fMRI, EEG, MEG, TMS, or NIRS to investigate aspects of brain and language, in linguistic domains ranging from phonetics to discourse processing. The amount of resources devoted to such studies suggests that they are motivated by a viable and successful research program, and implies that substantive progress is being made. At the very least, the amount and vigor of such research implies that something significant is being learned. In this article, we present a critique of the dominant research program, and provide a cautionary perspective that challenges the belief that explanatorily significant progress is already being made. Our critique focuses on the question of whether current brain/language research provides an example of interdisciplinary cross-fertilization, or an example of cross-sterilization. In developing our critique, which is in part motivated by the necessity to examine the presuppositions of our own work (e.g. Embick, Marantz, Miyashita, O'Neil, Sakai, 2000; Embick, Hackl, Schaeffer, Kelepir, Marantz, 2001; Poeppel, 1996; Poeppel et al. 2004), we identify fundamental problems that must be addressed if progress is to be made in this area of inquiry. We conclude with the outline of a research program that constitutes an attempt to overcome these problems, at the core of which lies the notion of computation.
Article
Full-text available
If we accept the view that language first evolved from the conceptual structure of our pre-linguistic ancestors, several questions arise, including: What kind of structure? Concepts about what? Here we review research on the vocal communication and cognition of nonhuman primates, focusing on results that may be relevant to the earliest stages of language evolution. From these data we conclude, first, that nonhuman primates' inability to represent the mental states of others makes their communication fundamentally different from human language. Second, while nonhuman primates' production of vocalizations is highly constrained, their ability to extract complex information from sounds is not. Upon hearing vocalizations, listeners acquire information about their social companions that is referential, discretely coded, hierarchically structured, rule-governed, and propositional. We therefore suggest that, in the earliest stages of language evolution, communication had a formal structure that grew out of its speakers' knowledge of social relations.
Article
Full-text available
Jackendoff defends a mentalist approach to semantics that investigates conceptual structures in the mind/brain and their interfaces with other structures, including specifically linguistic structures responsible for syntactic and phonological competence. He contrasts this approach with one that seeks to characterize the intentional relations between expressions and objects in the world. The latter, he argues, cannot be reconciled with mentalism. He objects in particular that intentionality cannot be naturalized and that the relevant notion of object is suspect. I critically discuss these objections, arguing in part that Jackendoff’s position rests on questionable philosophical assumptions.
Article
Full-text available
The major “contribution” of generative grammar to cognitive science is negative. The hermetic disjuncture of linguistic research from biological principles and facts has influenced cognitive science. Linguists have followed the pied piper down a different path from the one pointed out by Charles Darwin. As Dobzhansky (1973) noted, “Nothing in biology makes sense except in the light of evolution.” The hermetic nature of much linguistic research is apparent even in phonology, which must reflect biological facts concerning speech production. For example, studies dating back to 1928 show that tongue “features” do not specify vowel distinctions. However, the irrefutable findings of these cineradiographic and MRI studies are generally ignored by linguists. Chomsky’s central premise, that syntactic ability derives from an innate “Universal Grammar” common to all human beings, constitutes a strong biological claim. But if a UG genetically similar for all “normal” individuals existed, one of the central premises of Darwinian evolutionary biology, genetic variation, would be false. Concepts and processes borrowed from linguistics such as “modularity” have impeded our understanding of brain-behavior relations. Some aspects of behavior are regulated in specific localized “modules” in the brain, but current research demonstrates that the neural architecture regulating human language is also implicated in motor control, cognition, and other aspects of behavior. The neural bases of enhanced human language are not separable from cognition and motor ability. The supposedly unique aspect of syntax, its “reiterative” productivity, appears to derive from subcortical structures that play a part in neural circuits regulating motor control. Natural selection aimed at enhancing adaptive motor control ultimately yielded a basal ganglia “sequencing engine” that can produce a potentially infinite number of novel actions, thoughts, or “sentences” from a finite number of basic elements. Recent studies suggest that the human FOXP2 gene, which differs from similar regulatory genes in chimpanzees and other mammals, acts on the basal ganglia and other subcortical structures to confer enhanced human reiterative ability in domains as different as syntax and dancing. The probable date of the critical mutations on FOXP2 is coincident with the appearance of anatomically modern human beings, about 150,000 to 200,000 years ago. Humans thus can create more complex sentences than chimpanzees, but has anyone ever seen an ape dancing?
Article
Full-text available
The mind doesn’t work that way: The scope and limits of computational psychology. By Jerry Fodor. Cambridge, MA: MIT Press, 2000. Pp. 126. Reviewed by Ray Jackendoff, Brandeis University.
As has been his wont in recent years, Jerry Fodor offers here a statement of deepest pessimism about the possibility of doing cognitive science except in a very limited class of subdomains. F is of course justly celebrated for at least two major ideas in cognitive science: the language of thought (Fodor 1975) and the modularity hypothesis (Fodor 1983). However, the form in which these ideas have been taken enthusiastically into the lore of the field differs in some important respects from the form in which F couched them and in which he still believes. As I hope to show, the tension between F’s actual views and those generally attributed to him plays a major role in the position he advocates here. Here is a summary of F’s argument, as best as I can reconstruct it. The central issue is the problem of ‘abduction’: how one determines the truth of a proffered proposition and its consistency with one’s beliefs. The chief obstacle to successful abduction is that meaning is holistic: one must potentially check the proffered proposition and inferences from it against one’s entire network of belief/knowledge. The resulting combinatorial explosion makes it impossible to reliably fix new beliefs and plan new actions within a traditional Turing-style computation. For F, this casts serious doubt on the computational theory of mind, which presumes Turing-style ‘symbolic’ computation over the syntactic form of mental representations. F dismisses a number of proposed solutions to the problem of abduction. Connectionist-style computation, he maintains, is actually a step backward, since it cannot even capture the characteristic free combinatoriality of thought, an essential feature of the language of thought hypothesis. Here I concur; Marcus 2001 offers an extended argument to this effect. F also argues that a system of heuristics is unsatisfactory, since one needs to perform an abduction to determine which heuristic to apply. I find this argument less convincing; we’ll return to it below. It is worth mentioning that when F speaks of ‘Turing-style computation’, it is not clear whether he intends to include massively parallel ‘symbolic’ computation. Such computation is perhaps mathematically equivalent to serial Turing-style computation, but it is quite different in practical terms. Certainly the brain’s form of computation is massively parallel, whether connectionist or symbolic or some combination thereof. It is interesting therefore to ask whether such computation is of any practical help in solving the combinatorial explosion of abduction; F does not address this question. F’s dismissals of connectionism and heuristics, however, are just warmups for his principal line of attack. This is aimed against the thesis of ‘massive modularity’ proposed by such people as Pinker (1997) and Cosmides and Tooby (1992): the idea that the entire mind (or most of it, anyway) is made up of innate domain-specific modules. Most everybody seems to consider this a natural extension...
Article
Full-text available
Spatial orientation and direction are core areas of human and animal thinking. But, unlike animals, human populations vary considerably in their spatial thinking. Revealing that these differences correlate with language (which is probably mostly responsible for the different cognitive styles), this book includes many cross-cultural studies investigating spatial memory, reasoning, types of gesture and wayfinding abilities. It explains the relationship between language and cognition and cross-cultural differences in thinking to students of language and the cognitive sciences.
Article
Full-text available
It is argued that the principles needed to explain linguistic behavior are domain-general and based on the impact that specific experiences have on the mental organization and representation of language. This organization must be sensitive to both specific information and generalized patterns. In addition, knowledge of language is highly sensitive to frequency of use: frequently-used linguistic sequences become more frequent, more accessible and better integrated. The evidence adduced is mainly from phonology and morphology and addresses the issue of gradience and specificity found in postulated units, categories, and dichotomies such as regular and irregular, but the points apply to all levels of linguistic analysis including the syntactic, semantic, and discourse levels. Appropriate models for representing such phenomena are considered, including exemplar models and connectionist models, which are evolving to achieve a better fit with linguistic data. The major criticism of connectionist models often raised from within the combinatorial paradigm of much existing linguistic theory - that they do not capture 'free combination' to the extent that rule-based systems do - is regarded as a strength rather than a weakness. Recent connectionist models exhibit greater productivity and systematicity than earlier variants, but still show less uniformity of generalization than combinatorial models do. The remaining non-uniformity that the connectionist models show is appropriate, given that such non-uniformity is the rule in language structure and language behavior.
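A toy illustration of the frequency-sensitive, exemplar-based picture sketched in this abstract: in the following Python fragment, categories are clouds of stored tokens and classification is a similarity-weighted vote in which frequent exemplars pull harder. All names and numbers are invented for illustration; this is not a model from the paper.

import math

# Hypothetical stored exemplars: (category, acoustic value, token frequency).
exemplars = [("t", 20.0, 50), ("t", 22.0, 30), ("d", 35.0, 10), ("d", 33.0, 5)]

def classify(x, bandwidth=5.0):
    """Similarity-weighted vote over stored tokens; frequency of use scales
    each exemplar's contribution, so well-practiced tokens dominate."""
    votes = {}
    for category, value, freq in exemplars:
        similarity = math.exp(-((x - value) ** 2) / (2 * bandwidth ** 2))
        votes[category] = votes.get(category, 0.0) + freq * similarity
    return max(votes, key=votes.get)

print(classify(28.0))
# Prints "t": although the single nearest exemplar is a "d" token, the
# high-frequency "t" cloud dominates the vote, reflecting the claim that
# knowledge of language is highly sensitive to frequency of use.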
Article
Full-text available
Ambiguity resolution is a central problem in language comprehension. Lexical and syntactic ambiguities are standardly assumed to involve different types of knowledge representations and be resolved by different mechanisms. An alternative account is provided in which both types of ambiguity derive from aspects of lexical representation and are resolved by the same processing mechanisms. Reinterpreting syntactic ambiguity resolution as a form of lexical ambiguity resolution obviates the need for special parsing principles to account for syntactic interpretation preferences, reconciles a number of apparently conflicting results concerning the roles of lexical and contextual information in sentence processing, explains differences among ambiguities in terms of ease of resolution, and provides a more unified account of language comprehension than was previously available.
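To make the "single mechanism" claim in this abstract concrete, here is a deliberately minimal Python sketch; the alternatives, frame names, and evidence values are invented for exposition, not the authors' implementation. Each lexical alternative bundles a structural frame with its own evidence, and frequency and contextual fit compete through one and the same scoring rule.

# Hypothetical lexical alternatives for the ambiguous verb form "raced",
# each bundling a structural frame with invented evidence values.
alternatives = {
    "main-verb": {"frequency": 0.8, "context_fit": 0.2},
    "reduced-relative": {"frequency": 0.2, "context_fit": 0.9},
}

def resolve(alts):
    """One competition mechanism for both 'lexical' and 'syntactic'
    ambiguity: multiply the evidence sources and normalize."""
    scores = {name: a["frequency"] * a["context_fit"]
              for name, a in alts.items()}
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

print(resolve(alternatives))
# With strong enough contextual support, the "reduced-relative" reading
# overtakes the more frequent "main-verb" reading, as in recovery from
# "The horse raced past the barn fell."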
Article
Full-text available
Chomsky and Halle (1968) and many formal linguists rely on the notion of a universally available phonetic space defined in discrete time. This assumption plays a central role in phonological theory. Discreteness at the phonetic level guarantees the discreteness of all other levels of language. But decades of phonetics research demonstrate that there exists no universal inventory of phonetic objects. We discuss three kinds of evidence: first, phonologies differ incommensurably; second, some phonetic characteristics of languages depend on intrinsically temporal patterns; and third, some linguistic sound categories within a language are different from each other despite a high degree of overlap that precludes distinctness. Linguistics has mistakenly presumed that speech can always be spelled with letter-like tokens. A variety of implications of these conclusions for research in phonology are discussed.
Chapter
This book makes a fundamental contribution to phonology, linguistic typology, and the nature of the human language faculty. Distinctive features in phonology distinguish one meaningful sound from another. Since the mid-twentieth century they have been seen as a set characterizing all possible phonological distinctions and as an integral part of Universal Grammar, the innate language faculty underlying successive versions of Chomskyan generative theory. The usefulness of distinctive features in phonological analysis is uncontroversial, but the supposition that features are innate and universal rather than learned and language-specific has never, until now, been systematically tested. In his pioneering account Jeff Mielke presents the results of a crosslinguistic survey of natural classes of distinctive features covering almost six hundred of the world's languages drawn from a variety of different families. He shows that no theory is able to characterize more than 71 per cent of classes, and further that current theories, deployed either singly or collectively, do not predict the range of classes that occur and recur. He reveals the existence of apparently unnatural classes in many languages. Even without these findings, he argues, there are reasons to doubt whether distinctive features are innate: for example, distinctive features used in signed languages are different from those in spoken languages, even though deafness is generally not hereditary. The author explains the grouping of sounds into classes and concludes by offering a unified account of what previously have been considered to be natural and unnatural classes. The data on which the analysis is based are freely available in a program downloadable from the publisher's web site.
Article
"On The Definition of Word" develops a consistent and coherent approach to central questions about morphology and its relation to syntax. In sorting out the various senses in which the word "word" is used, it asserts that three concepts which have often been identified with each other are in fact distinct and not coextensive: listemes (linguistic objects permanently stored by the speaker); morphological objects (objects whose shape can be characterized in morphological terms of affixation and compounding); and syntactic atoms (objects that are unanalyzable units with respect to syntax).The first chapter defends the idea that listemes are distinct from the other two notions, and that all one can and should say about them is that they exist. A theory of morphological objects is developed in chapter two. Chapter three defends the claim that the morphological objects are a proper subset of the syntactic atoms, presenting the authors' reconstruction of the important and much-debated Lexical Integrity Hypothesis. A final chapter shows that there are syntactic atoms which are not morphological objects.Anne Marie Di Sciullo is in the Department of Linguistics at the University of Quebec. Edwin Williams is in the Department of Linguistics at the University of Massachusetts. "On The Definition of Word" is Linguistic Inquiry Monograph 14.
Article
Standard practice in linguistics often obscures the connection between theory and data, leading some to the conclusion that generative linguistics could not serve as the basis for a cognitive neuroscience of language. Here the foundations and methodology of generative grammar are clarified with the goal of explaining how generative theory already functions as a reasonable source of hypotheses about the representation and computation of language in the mind and brain. The claims of generative theory, as exemplified, e.g., within Chomsky's (2000) Minimalist Program, are contrasted with those of theories endorsing parallel architectures with independent systems of generative phonology, syntax and semantics. The single generative engine within Minimalist approaches rejects dual routes to linguistic representations, including possible extra-syntactic strategies for semantic structure-building. Clarification of the implications of this property of generative theory undermines the foundations of an autonomous psycholinguistics, as established in the 1970s, and brings linguistic theory back to the center of a unified cognitive neuroscience of language.
Article
Anatomy Matters considers the shared concerns of cognitive science and linguistics, especially with respect to notions of modularity. In order to encourage interdisciplinary exploration of language phenomena, and a concomitant notion of "degrees of modularity" in the Jackendoff sense, the neurobiological basis of language and the emergence of language ability in the species is discussed. In humans (but not in other primates) the parietal-occipital-temporal junction with its connection to Broca's area yields a plausible biological basis for Conceptual Structure, understood to be composed of hierarchically arranged, abstract meaning primitives derived from sensory perception. Consideration of the neuroanatomy, especially comparative primate neuroanatomy, leads to the expectation that evolutionary genetics must explain how the brain takes the necessary anatomical structure. The centrality of Conceptual Structure to these various concerns then connects to the question of whether linguists' notions of modularity entail a realistic theory of mind.
Article
Universal Grammar offers a set of hypotheses about the biases children bring to language-learning. But testing these hypotheses is difficult, particularly if we look only at language-learning under typical circumstances. Children are influenced by the linguistic input to which they are exposed at the earliest stages of language-learning. Their biases will therefore be obscured by the input they receive. A clearer view of the child's preparation for language comes from observing children who are not exposed to linguistic input. Deaf children whose hearing losses prevent them from learning the spoken language that surrounds them, and whose hearing parents have not yet exposed them to sign language, nevertheless communicate with the hearing individuals in their worlds and use gestures, called homesigns, to do so. This article explores which properties of Universal Grammar can be found in the deaf children's homesign systems, and thus tests linguistic theory against acquisition data.
Article
In the last fifty or so years, the field of linguistics has become concerned with the study of language as a means for understanding how the mind works. Linguistic theories that advocate the idea that structure-building computations underlie human grammar have been assumed to reveal the same type of computational operations present in theories of other modules of the mind. At the same time, with the emergence of new scientific technical advances, more concrete and tangible light is being shed on how the brain actually operates in terms of both its mechanisms and the loci of activity that correspond to specific functions. In MRI studies of language in use (and other similar studies using different types of techniques), various areas of the brain have been shown to exhibit activity, rather than just one central location, and the idea of language emerging out of a network of interconnected distinct brain circuits or systems has become, for the most part, widely acceptable. Yet generative grammarians, assuming a level of analysis that is not primarily concerned with such neurological findings, continue to consider language to be an autonomous and homogeneous entity unto itself in the mind: a module, an innate organic whole, whose maturational process is shaped by triggers in the environment, and which is, in essence, a computational system of combining or merging building block-like elements together in a recursive fashion to form hierarchical structures. This approach appears to be removed from the aforementioned findings regarding the physically scattered locales of language in the brain (but cf. Marantz's contribution to this special issue, which finds no tension between current findings from brain science and generative linguistic theory).
Article
If the biological basis of language is to provide insight for linguistic theory, description of the aspects of language that play a role in the determination of language lateralization is essential. This article will summarize what is known about the distribution of language across the hemispheres using information from the Wada procedure and comparing those results with those from investigations using newer less invasive methods like fMRI. This article will also describe what is known about the limits of language in the isolated right hemisphere when acquired during normal language development. The profile of language in the isolated right hemisphere may qualify as one model of an evolutionarily older "protolanguage." Questions posed in both of these areas provide a rich opportunity for interaction between linguists, psycholinguists, and neuropsychologists.
Article
This article argues that subject auxiliary inversion in English (SAI) provides an example of a syntactic generalization that is strongly motivated by a family of closely related functions. Recognition of the functional properties of each subconstruction associated with SAI allows us to predict many seemingly arbitrary properties of SAI: e.g., its (partial) restriction to appear in main clauses, the fact that the inversion only involves the first auxiliary, and the fact that its use in comparatives is more limited. The dominant feature of SAI, being non-positive, is also argued to motivate the syntactic form of SAI. It is suggested that attention to the rich data inherent in language and to findings in categorization research simultaneously serves to reinforce and benefit our understanding of both language and categorization more generally.
Article
This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.
Article
The question of whether generative grammar offers insights into the mind turns on whether and how a generative grammar is an account of what is in the mind. A potentially useful perspective on this question can be achieved by looking at a cognitive phenomenon that is similar in many respects to language but crucially different, namely jazz. Jazz performance is apparently rule-governed and improvisational, like language. It is useful to take jazz as the exemplar of a complex cognitive task and to think of language as different from jazz in several critical respects that may account for some of its special design features, as well as the fact that it is acquired naturally and without explicit instruction. One difference that may have considerable explanatory force is that language is used to encode and communicate Conceptual Structure, while in the case of jazz (and music in general), what is communicated is the form itself.
Article
This chapter presents a discussion on the syntax-semantics interface. The task of a theory of semantic interpretation is to characterize how elements in a syntactic string semantically relate to one another. Clearly, this depends on how one conceptualizes the meaning of the elementary building blocks-that is, the terminal nodes of a syntactic tree. Interpreting an expression typically requires integrating it into an evolving discourse model. Frequently, this requires resolving ambiguities at conceptually distinct levels, fixing reference, and drawing inferences to align local and global aspects of the discourse. To fully accomplish this task, it is uncontroversial that, in addition to lexical and syntactic constraints, comprehenders must draw upon pragmatic knowledge. The importance of high-level constraints has been illustrated by the finding that comprehenders sometimes adopt a pragmatically plausible interpretation even if it is incongruent with lexical and syntactic constraints.
Article
In the 1980s, Charles Clifton referred to a "psycholinguistic renaissance" in cognitive science. During that time, there was almost unanimous agreement that any self-respecting psycholinguist would make sure to keep abreast of major developments in generative grammar, because a competence model was essential, and the linguistic theory was the proper description of that competence. But today, many psycholinguists are disenchanted with generative grammar. One reason is that the Minimalist Program is difficult to adapt to processing models. Another is that generative theories appear to rest on a weak empirical foundation, due to the reliance on informally gathered grammaticality judgments. What can be done to remedy the situation? First, formal linguists might follow Ray Jackendoff's recent suggestion that they connect their work more closely to research in the rest of cognitive science. Second, syntactic theory should develop a better methodology for collecting data about whether a sentence is good or bad. A set of standards for creating examples, testing them on individuals, analyzing the results, and reporting findings in published work should be established. If these two ideas were considered, linguistic developments might once again be relevant to the psycholinguistic enterprise.
Article
The "New Synthesis" in cognitive science is committed to the computational theory of mind (CTM), massive modularity, nativism, and adaptationism. In The mind doesn't work that way , Jerry Fodor argues that CTM has problems explaining abductive or global inference, but that the New Synthesis offers no solution, since massive modularity is in fact incompatible with global cognitive processes. I argue that it is not clear how global human mentation is, so whether CTM is imperiled is an open question. Massive modularity also lacks some of the invidious commitments Fodor ascribes to it. Furthermore, Fodor's anti-adaptationist arguments are in tension with his nativism about the contents of modular systems. The New Synthesis thus has points worth preserving.
Article
Generative (Chomskyan) linguistics has a number of conceptual problems impeding its acceptance into modern cognitive science. A major issue centers on Uniformitarianism, the assumption that the supposed innate core of language - Universal Grammar - has remained the same since its instantaneous origin and remains basically the same (exhibits "continuity") throughout ontogeny. Other problems, noted by George Miller, are emphasis on structure rather than function, on competence rather than performance, as well as the tendency to regard simplifications as explanations. We illustrate these problems in the domain of speech acquisition, which, rather than exhibiting continuity, involves a progression from a syllable reduplication mode to an opposite syllable variegation mode. We present an alternative Neodarwinian conceptualization - the Frame/Content theory - in which the time domain is central for both phylogeny and ontogeny. According to this theory, the reduplication-to-variegation progression in the ontogeny of speech (from syllabic Frames to segmental Content) is considered to recapitulate its phylogeny.