Article

The Universal Generative Faculty: The source of our expressive power in language, mathematics, morality, and music

Authors:
  • Marc D. Hauser (Risk-Eraser, LLC)
  • Jeffrey Watumull

Abstract

(in press) Journal of Neurolinguistics, Special Issue "Language evolution: on the origin of the lexical and syntactic structures". Many have argued that the expressive power of human thought comes from language. Language plays this role, so the argument goes, because its generative computations construct hierarchically structured, abstract representations, covering virtually any content and communicated in linguistic expressions. However, language is not the only domain to implement generative computations and abstract representations, and linguistic communication is not the only medium of expression. Mathematics, morality, and music are three others. These similarities are not, we argue, accidental. Rather, we suggest they derive from a common computational system that we call the Universal Generative Faculty or UGF. UGF is, at its core, a suite of contentless generative procedures that interface with different domains of knowledge to create contentful expressions in thought and action. The representational signatures of different domains are organized and synthesized by UGF into a global system of thought. What was once considered the language of thought is, on our view, the more specific operation of UGF and its interfaces to different conceptual domains. This view of the mind changes the conversation about domain-specificity, evolution, and development. On domain-specificity, we suggest that if UGF provides the generative engine for different domains of human knowledge, then the specificity of a given domain (e.g., language, mathematics, music, morality) is restricted to its repository of primitive representations and to its interfaces with UGF.
Evolutionarily, some generative computations are shared with other animals (e.g., combinatorics), both for recognition-learning and generation-production, whereas others are uniquely human (e.g., recursion); in some cases, the cross-species parallels may be restricted to recognition-learning, with no observable evidence of generation-production. Further, many of the differences observed between humans and other animals, as well as among nonhuman animals, are the result of differences in the interfaces: whereas humans promiscuously traverse (consciously and unconsciously) interface conditions so as to combine and analogize concepts across many domains, nonhuman animals are far more limited, often restricted to a specific domain as well as a specific sensory modality within the domain. Developmentally, the UGF perspective may help explain why the generative powers of different domains appear at different stages of development. In particular, because UGF must interface with domain-specific representations, which develop on different time scales, the generative power of some domains may mature more slowly (e.g., mathematics) than others (e.g., language). This explanation may also contribute to a deeper understanding of cross-cultural differences among human populations, especially cases where the generative power of a domain appears absent (e.g., cultures with only a few count words). This essay provides an introduction to these ideas, including a discussion of implications and applications for evolutionary biology, human cognitive development, cross-cultural variation, and artificial intelligence.

Keywords: domain-specificity | evolution | generative functions | language faculty | recursion | Turing machine | Universal Generative Faculty

The ideas developed in this essay grow out of several different intellectual traditions within the formal and cognitive sciences.
Broadly speaking, we are interested in what enables human minds to generate a limitless range of ideas and expressions across many different domains of knowledge. To what extent is this facility enabled by domain-general or domain-specific mechanisms? To what extent are these facilities shared with other organisms and to what extent are they uniquely human? To what extent are the generative mechanisms that operate in different domains of knowledge the same or different, and why? What accounts for the developmental timing and maturation of different domains of knowledge? And could the creative, generative power of human intelligence be realized in computing machinery? This essay provides an introductory sketch of an idea that, we believe, helps shed new light on these fundamental questions.

Different traditions of thought

One tradition that not only launched many of the questions noted above, but developed a significant position on the answers, is Chomsky's (1955; 1995) work in linguistics, and the nature of mind more generally. The argument, in brief, is that humans are endowed with a finite cognitive computational system that generates an infinity of meaningful expressions. This is a linguistic system or faculty, with unique — specific to our species and the domain of language — recursive procedures that interface with both the conceptual-intentional (semantics/pragmatics) and sensory-motor (phonology/phonetics) systems to generate hierarchically structured representations. This intensional system — I-language — is internal to an individual, and is often described as forming a language of thought. The sets of expressions this system enumerates have been described (not by Chomsky) as E-languages (e.g., English, French, Japanese, etc.). Based on Chomsky's linguistic framework, some have argued that language enables the expressive power of all other domains, and in many cases, provides the cognitive glue across domains.
Thus, for example, Spelke (2016) has argued that what enables us to integrate different domains or modules of thought, including aspects of space and number, is language. In a classic set of experiments (Hermer and Spelke, 1994; 1996) on spatial reorientation following disorientation, young children appear incapable of integrating information about landmarks with information about the geometry of the space, a result that parallels findings from experiments originally carried out on rats (Cheng, 1986). Such integration only occurs when children acquire spatially-relevant words (e.g., right of, in front of), the linguistic glue that integrates information from the landmark and geometry systems. Moreover, the flawless performance of adults was reduced to that of young children and rats when they were required to carry out a verbal shadowing task, one that effectively blocks access to the language faculty. This perspective sets up language as both the generative machinery of thought and the system that enables interfaces across domains. Similar ideas have inspired models of artificial intelligence in which the human-like AI understands the world by using linguistic machinery to combine commonsense knowledge with perceptual (particularly visual) representations in the form of explanatory "stories" (Winston, 2012).


... This is what is also supposed to support linguistic creativity, which allows humans to produce an unbounded number of expressions from a bounded stock of linguistic materials such as words (Chomsky, 1995, 2000). This may be considered a special and unique cognitive capacity, and additionally, many general facets of this linguistic capacity are believed to be shared with other cognitive capacities such as mathematical cognition and musical cognition (Hauser & Watumull, 2017). Undoubtedly, this adds to the richness of the level of cognitive organization responsible for linguistic productivity and also for productivity in nonlinguistic cognitive domains which (may) have borrowed the essential pattern of operations from language. ...
... If so, even a process-oriented neurobiological account of language is going to be eventually inadequate. That is because no information transfer can take place between the neurobiological organization and the mental system of language, given that the brain is a concrete finite system, while the system of language (as characterized or presupposed, for instance, in Chomsky, 2000; Hauser & Watumull, 2017) is highly abstract (beyond space and time) and also facilitates the creation of potentially unbounded structures. Regardless of whether or not one presupposes a dualist stance on the required information transfer, it is hard to see how the brain, which is a finite system, can transfer information to, and also receive information from, an abstract and extra-spatio-temporal system of language (one that abstracts away from real-time linguistic processing/performance). ...
Article
Full-text available
The biological foundations of language reflect assumptions about the way language and biology relate to one another, and with the rise of biological studies of language, we appear to have come closer to a deep understanding of linguistic cognition, the part of cognition constituted by language. This article argues that relations of neurobiological and genetic instantiation between linguistic cognition and the underlying biological substrate are ultimately irrelevant to understanding the higher-level structure and form of language. Linguistic patterns and those that make up the character of cognition constituted by language do not simply arise from the biological substrate because higher-level structures typically assume forms based on constraints that only emerge once these new levels are constructed. The goal is not to show how the mapping problem between linguistic cognition and neurobiology can be solved. Rather, the goal is to show that the mapping problem ceases to exist once a different understanding of language-(neuro)biology relations is embraced. With this goal, this article first uncovers a number of logical and conceptual fallacies in strategies deployed in understanding language-(neuro)biology relations. After having shown these flaws, the article offers an alternative view of language-biology relations that shows how biological constraints shape language (nature and form), making it what it is.
... Understanding how humans and other animals encode and represent temporal sequences has recently emerged as a crucial issue in the study of comparative cognition, as it allows a direct comparison between species and therefore a test of theories of human uniqueness [4,5]. Recursive phrase structures have been proposed to lie at the core of the human language faculty [6], and a competence for nested trees has been postulated to underlie several other human cognitive abilities such as mathematics or music [4,7–9]. According to a recent review [4], non-human animals may encode sequences using a variety of encoding schemes, including transition probabilities, ordinal regularities (what comes first, second, etc.), recurring chunks, and algebraic patterns [10–14]. ...
... According to a recent review [4], non-human animals may encode sequences using a variety of encoding schemes, including transition probabilities, ordinal regularities (what comes first, second, etc.), recurring chunks, and algebraic patterns [10–14]. However, several authors hypothesize that only humans have access to a language-like representation of nested trees [4,8], also being described as a "universal generative faculty" [9] or "language of thought" [15] capable of encoding arbitrarily nested rules. ...
Article
Full-text available
Working memory capacity can be improved by recoding the memorized information in a condensed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using an abstract internal language and a recursive compression algorithm. The theory predicts that the psychological complexity of a given sequence should be proportional to the length of its shortest description in the proposed language, which can capture any nested pattern of repetitions and alternations using a limited number of instructions. Five experiments examine the capacity of the theory to predict human adults’ memory for a variety of auditory and visual sequences. We probed memory using a sequence violation paradigm in which participants attempted to detect occasional violations in an otherwise fixed sequence. Both subjective complexity ratings and objective violation detection performance were well predicted by our theoretical measure of complexity, which simply reflects a weighted sum of the number of elementary instructions and digits in the shortest formula that captures the sequence in our language. While a simpler transition probability model, when tested as a single predictor in the statistical analyses, accounted for significant variance in the data, the goodness-of-fit with the data significantly improved when the language-based complexity measure was included in the statistical model, while the variance explained by the transition probability model largely decreased. Model comparison also showed that shortest description length in a recursive language provides a better fit than six alternative previously proposed models of sequence encoding. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
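The compression theory summarized in this abstract can be sketched in code. The toy Python function below is an illustrative assumption, not the authors' actual model: it scores a binary sequence by the length of a short description in a miniature language with literal digits, a repetition instruction, and an alternation instruction, with unit costs chosen for simplicity.

```python
def complexity(seq: str) -> int:
    """Length of a short description of `seq` in a toy language.

    The instruction set (literal, repeat, alternate) and the unit
    costs are illustrative assumptions, not the paper's language.
    """
    n = len(seq)
    best = n  # fallback: list every symbol literally, one unit each
    # Repetition: seq = block * k costs complexity(block) + 2 units
    # (one 'repeat' instruction plus one repetition-count digit).
    for blk in range(1, n):
        if n % blk == 0 and seq == seq[:blk] * (n // blk):
            best = min(best, complexity(seq[:blk]) + 2)
    # Alternation: 0101... costs 3 units (instruction, start symbol, length).
    if n >= 2 and all(seq[i] != seq[i + 1] for i in range(n - 1)):
        best = min(best, 3)
    return best
```

On this toy measure, `complexity("00000000")` and `complexity("01010101")` are both 3, while an irregular string such as `"0010"` costs its full literal length of 4, mirroring the prediction that highly regular sequences should be easier to remember than their raw length suggests.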
... If someone were to say that the waggle dance of the honeybee 'may' depend on the laws of motion, no one would pay attention. A recent comparative study of cross-species generative systems asserts that nonhuman animals have nothing resembling human recursive syntax [7]. While many animal species recognize statistical-probabilistic sequences, linear associations, or even algebraic rules, only humans appear capable of internalizing generative algorithms. ...
... Moro has shown that the superficial parallels here between action sequences and sentences are misguided, again essentially backwards [9,10]. Self-reference, a defining property of recursion, appears to be absent from the domain of motor action and spatiotemporal imagination of nested maps [7], yet it is a rich part of human language. ...
Article
Unraveling the evolution of human language is no small enterprise. One could start digging somewhere in the largely unobservable past, working forwards to the present, hoping to surface in the right spot. Alternatively, one could start with the currently observed and well-established properties of human language, the phenotype of language, and work backwards, with these ‘knowns’ guiding the search for otherwise speculative historical ‘unknowns’. In a recent issue of Trends in Cognitive Sciences, Corballis [1] appears confident that only the first strategy will serve. Evolutionary explanations necessarily are historical, but few evolutionary biologists faced with such a paucity of historical evidence would forge ahead without first defining what, exactly, the phenotype is that ultimately evolved [2]. Yet, Corballis criticizes what we actually know about the human language phenotype, because it does not conform to his speculations [3]. We believe that Corballis’ odd research inversion suffers from misconceptions regarding what we know about both language and evolution.
... In all of these studies, the observed changes are bilateral, extended, and go beyond the language network per se. Such an extended network does not fit with the hypothesis that a single localised system, such as natural language or a universal generative faculty, is the primary engine of all human-specific abstract symbolic abilities (Hauser and Watumull, 2017; Spelke, 2003). Rather, our results suggest that multiple parallel and partially dissociable human brain networks possess symbolic abilities and deploy them in different domains such as natural language, music and mathematics (Amalric and Dehaene, 2017; Chen et al., 2021; Dehaene et al., 2022; Fedorenko et al., 2011; Fedorenko and Varley, 2016). ...
Article
Full-text available
The emergence of symbolic thinking has been proposed as a dominant cognitive criterion to distinguish humans from other primates during hominisation. Although the proper definition of a symbol has been the subject of much debate, one of its simplest features is bidirectional attachment: the content is accessible from the symbol, and vice versa. Behavioural observations scattered over the past four decades suggest that this criterion might not be met in non-human primates, as they fail to generalise an association learned in one temporal order (A to B) to the reverse order (B to A). Here, we designed an implicit fMRI test to investigate the neural mechanisms of arbitrary audio–visual and visual–visual pairing in monkeys and humans and probe their spontaneous reversibility. After learning a unidirectional association, humans showed surprise signals when this learned association was violated. Crucially, this effect occurred spontaneously in both learned and reversed directions, within an extended network of high-level brain areas, including, but also going beyond, the language network. In monkeys, by contrast, violations of association effects occurred solely in the learned direction and were largely confined to sensory areas. We propose that a human-specific brain network may have evolved the capacity for reversible symbolic reference.
... Kinsella (2009), through studies of human cognitive systems, animal cognitive systems, and nonhuman communicative systems, found that recursion is widely present in numerical, cognitive and communicative domains such as navigation, music, and games. Hauser and Watumull's (2017) study also found that recursive operations exist in language, mathematics, music, and moral concepts, and proposed the idea of a Universal Generative Faculty (UGF), according to which a set of contentless generative procedures interacting with different knowledge domains can produce contentful expressions in actions and thoughts (Figure 1). ...
... The computational algorithm employed in these patterns is based on the principle of recursion: the generation of hierarchically built tree structures of increasing complexity out of discrete elements (like phonemes, words or word groupings), into an expressive composition (the syntax) that is understandable to others sharing similar knowledge (Zuidema et al., 2018). Marc Hauser and Jeffrey Watumull argued that this algorithm of recursion might underlie a more universal generative faculty, realizing not only the building of syntax in language and music, but also other human singularities like mathematics and morality (Hauser and Watumull, 2017). This model gained additional significance when it could be shown that lesions in the so-called Broca's area lead to language disorders with agrammatism (Friederici, 2023). ...
Article
Full-text available
The processing of information in neural networks is basically determined by oscillatory activity. In this paper rhythm in music is discussed as a phenomenon that is processed in neural networks in a comparable manner to the information it provides. Rhythm in music can therefore lend itself to experimental research comparing information processing in biological versus artificial neural networks.
... Although non-symbolic numerosity may be perceived based on an evolutionarily old brain circuit (Nieder, 2020), no other animal species can perform a recursive combination of symbolic numbers, variables, and operators to construct infinitely complex expressions. For example, the arithmetic expression "3 + 5" can be recursively merged with another multiplication operator to produce a (syntactically) more complex expression: "(3 + 5) × 2." Theoretical linguists argue that recursive computation in natural language also provides a basis for the natural number system (Hauser, Chomsky, & Tecumseh Fitch, 2002;Chomsky, 2008), which is consistent with the concept of universal generative faculty shared by language, mathematics, music, and morality (Hauser & Watumull, 2017;Fujita & Fujita, 2021). However, this highlights the need to develop a common theoretical foundation for such different cognitive domains. ...
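The recursive merge in the "(3 + 5) × 2" example above is easy to render concretely. The minimal Python sketch below (class and function names are illustrative, not drawn from the cited work) represents expressions as nested trees, so that any expression can serve, unchanged, as an operand of a larger one:

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class Num:
    value: int


@dataclass
class Op:
    symbol: str  # "+" or "*"
    left: "Expr"
    right: "Expr"


Expr = Union[Num, Op]


def evaluate(e: Expr) -> int:
    """Recursively evaluate a nested arithmetic expression tree."""
    if isinstance(e, Num):
        return e.value
    ops = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}
    return ops[e.symbol](evaluate(e.left), evaluate(e.right))


# "3 + 5" is itself a complete expression, so it can be merged,
# unchanged, as the left operand of a larger one: (3 + 5) * 2.
inner = Op("+", Num(3), Num(5))
outer = Op("*", inner, Num(2))
```

`evaluate(outer)` returns 16. Because `Op` accepts any `Expr` as an operand, the merge can be repeated without bound, which is the sense of recursive combination at issue in the passage.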
Article
Full-text available
... Despite some recent claims that "there are no traits present in humans and absent in other animals that in isolation explain our species' superior cognitive performance", and that humans are simply "flexible cognitive allrounders" (Laland and Seed, 2021, p. 689), amongst other claims that language cannot even be given a biological account (Smit, 2022), many linguists maintain, as I will here, that this capacity for constructing hierarchical syntactic objects and assigning them a categorized, labeled identity is human-specific, even if the generic facility for recursion might be shared with other species (Liao et al., 2022). Furthermore, this capacity for hierarchical recursion has been linked to human-specific cognitive superiority (Dehaene et al., 2022; Hauser and Watumull, 2017). ...
Article
Full-text available
A comprehensive model of natural language processing in the brain must accommodate four components: representations, operations, structures and encoding. It further requires a principled account of how these components mechanistically, and causally, relate to one another. While previous models have isolated regions of interest for structure-building and lexical access, many gaps remain with respect to bridging distinct scales of neural complexity. By expanding existing accounts of how neural oscillations can index various linguistic processes, this article proposes a neurocomputational architecture for syntax, termed the ROSE model (Representation, Operation, Structure, Encoding). Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level. Elementary computations (O) that transform these units into manipulable objects accessible to subsequent structure-building levels are coded via high frequency gamma activity. Low frequency synchronization and cross-frequency coupling code for recursive categorial inferences (S). Distinct forms of low frequency coupling and phase-amplitude coupling (delta-theta coupling via pSTS-IFG; theta-gamma coupling via IFG to conceptual hubs) then encode these structures onto distinct workspaces (E). Causally connecting R to O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling; connecting S to E is a system of frontotemporal traveling oscillations; connecting E to lower levels is low-frequency phase resetting of spike-LFP coupling. ROSE is reliant on neurophysiologically plausible mechanisms, is supported at all four levels by a range of recent empirical research, and provides an anatomically precise and falsifiable grounding for the basic property of natural language syntax: hierarchical, recursive structure-building.
... Hierarchical behavior, that is, the perception and production of hierarchical structures, differs from hierarchical cognitive mechanisms and mental representations. The fact that hierarchical behavior is seen across multiple domains is uncontroversial (Corballis, 2014;Fischmeister, Martins, Beisteiner, & Fitch, 2017;Fitch, 2014;Hauser & Watumull, 2017;Truswell, 2017;Vyshedskiy, 2019); the claim that hierarchical cognitive mechanisms underlie this behavior is not (Frank, Bod, & Christiansen, 2012;Lobina, 2014). Indeed, hierarchical behavior may arise from non-hierarchical cognitive mechanisms like statistical learning (Camp, 2009;Rey, Perruchet, & Fagot, 2012;Santolin & Saffran, 2018) or ordinal reasoning (D'amato & Colombo, 1990;McGonigle & Chalmers, 1977;Orlov, Yakovlev, Hochstein, & Zohary, 2000;Terrace & McGonigle, 1994). ...
Article
Full-text available
Hierarchical cognitive mechanisms underlie sophisticated behaviors, including language, music, mathematics, tool‐use, and theory of mind. The origins of hierarchical logical reasoning have long been, and continue to be, an important puzzle for cognitive science. Prior approaches to hierarchical logical reasoning have often failed to distinguish between observable hierarchical behavior and unobservable hierarchical cognitive mechanisms. Furthermore, past research has been largely methodologically restricted to passive recognition tasks as compared to active generation tasks that are stronger tests of hierarchical rules. We argue that it is necessary to implement learning studies in humans, non‐human species, and machines that are analyzed with formal models comparing the contribution of different cognitive mechanisms implicated in the generation of hierarchical behavior. These studies are critical to advance theories in the domains of recursion, rule‐learning, symbolic reasoning, and the potentially uniquely human cognitive origins of hierarchical logical reasoning.
... The human aptitude for syntax, they claim, is based on that propensity which is also at work in mathematics and in the planning of motor acts. Hauser and Watumull (2017) also suggest dispensing with the notion of syntactic uniqueness. They state that language, mathematics, music, and even morality are all computations implemented by a contentless faculty. ...
... Hauser, Chomsky, and Fitch [13] famously proposed that the capacity to form recursive representations is absent in other animals and lies at the core of the human language faculty. The proposal was later extended to suggest that a competence for the mental representation and manipulation of nested tree structures, called dendrophilia [14], universal generative faculty [137], or recursive mental programs [138,139], underlies the singularity of the human mind in all cognitive domains [15]. ...
Article
Natural language is often seen as the single factor that explains the cognitive singularity of the human species. Instead, we propose that humans possess multiple internal languages of thought, akin to computer languages, which encode and compress structures in various domains (mathematics, music, shape…). These languages rely on cortical circuits distinct from classical language areas. Each is characterized by: (i) the discretization of a domain using a small set of symbols, and (ii) their recursive composition into mental programs that encode nested repetitions with variations. In various tasks of elementary shape or sequence perception, minimum description length in the proposed languages captures human behavior and brain activity, whereas non-human primate data are captured by simpler nonsymbolic models. Our research argues in favor of discrete symbolic models of human thought.
... As for music and language, it seems to me far more reasonable to suppose that music (to the very limited extent that it involves recursive computation) was exapted from language than the converse. Or, perhaps, as suggested by Jeffrey Watumull and Marc Hauser in recent work (Hauser & Watumull 2016), that recursive computation emerged and was applied in cognitive systems of digital infinity, language and arithmetic, maybe music. ...
... In a recent review, we distinguished the following five levels of sequence knowledge with increasing degrees of abstraction: transition and timing knowledge, chunking, ordinal knowledge, algebraic patterns, and nested tree structures generated by symbolic rules . We proposed that only humans possess a representation of nested tree structures, also described as a "universal generative faculty" (Hauser and Watumull, 2017) or "language of thought" (Fodor, 1975), which enables sequence encoding by "compressing" information using abstract rules. By contrast, macaque monkeys are thought to be more limited in their ability to spontaneously detect relational structures between items and compress sequence memory using an internal language. ...
Article
Full-text available
Sequence learning is a ubiquitous facet of human and animal cognition. Here, using a common sequence reproduction task, we investigated whether and how the ordinal and relational structures linking consecutive elements are acquired by human adults, children, and macaque monkeys. While children and monkeys exhibited significantly lower precision than adults for spatial location and temporal order information, only monkeys appeared to focus excessively on the first item. Most importantly, only humans, regardless of age, spontaneously extracted the spatial relations between consecutive items and used a chunking strategy to compress sequences in working memory. Monkeys did not detect such relational structures, even after extensive training. Monkey behavior was captured by a conjunctive coding model, whereas a chunk-based conjunctive model explained more variance in humans. These age- and species-related differences are indicative of developmental and evolutionary mechanisms of sequence encoding and may provide novel insights into uniquely human cognitive capacities. SIGNIFICANCE STATEMENT Sequence learning, the ability to encode the order of discrete elements and their relationships within a sequence, is a ubiquitous facet of cognition among humans and animals. By exploring sequence-processing abilities at different human developmental stages and in nonhuman primates, we found that only humans, regardless of age, spontaneously extracted the spatial relations between consecutive items and used an internal language to compress sequences in working memory. The findings provide insight into the origins of sequence capabilities in humans and how they evolve through development, helping to identify unique aspects of human cognitive capacity, which include the comprehension, learning, and production of sequences and, perhaps above all, language processing.
... As for music and language, it seems to me far more reasonable to suppose that music, to the very limited extent that it involves recursive computation, was extracted from language and not the converse. Or, perhaps, as suggested by Jeffrey Watumull and Marc Hauser in recent work (Hauser & Watumull, 2016), that recursive computation emerged and was applied in cognitive systems of digital infinity, language and arithmetic, maybe music. What about labels, the Label? ...
Article
Full-text available
Original article titled «50 Years Later: A Conversation about the Biological Study of Language with Noam Chomsky», published in the Forum section of the journal Biolinguistics, 11, 2017, SI: 487–499, http://www.biolinguistics.eu. Interview translated from English by Miguel Ángel Mahecha Bermúdez and Rubén Arboleda Toro.
... Similar views are not new at all. Hauser and Watumull (2017) propose the "Universal Generative Faculty (UGF)" as the domain-general generative engine shared by, e.g., language, mathematics, music and morality. Marcus (2006) already argued that "descent-with-modification modularity," as opposed to "sui generis modularity," is the right kind of modularity to understand the domain-specific nature of cognitive modules. ...
Article
Full-text available
Human language is a multi-componential function comprising several sub-functions, each of which may have evolved in other species independently of language. Among them, two sub-functions, or modules, have been claimed to be truly unique to humans, namely hierarchical syntax (known as “Merge” in linguistics) and the “lexicon.” This kind of species-specificity stands as a hindrance to our natural understanding of human language evolution. Here we challenge this issue and advance our hypotheses on how human syntax and lexicon may have evolved from pre-existing cognitive capacities in our ancestors and other species including but not limited to nonhuman primates. Specifically, we argue that Merge evolved from motor action planning, and that the human lexicon with the distinction between lexical and functional categories evolved from its predecessors found in animal cognition through a process we call “disintegration.” We build our arguments on recent developments in generative grammar but crucially depart from some of its core ideas by borrowing insights from other relevant disciplines. Most importantly, we maintain that every sub-function of human language keeps evolutionary continuity with other species’ cognitive capacities and reject a saltational emergence of language in favor of its gradual evolution. By doing so, we aim to offer a firm theoretical background on which a promising scenario of language evolution can be constructed.
... Rather, linguistic cognition also comprises a complex ensemble of representational properties, structures and relations from which systematic patterns of logical inferences and principles can be derived. Also, it may be recognized that the level of cognitive organization of language as characterized or presupposed, for instance, in Chomsky (2000) and Hauser & Watumull (2017) is highly abstract (beyond space and time) and also facilitates the creation of potentially unbounded structures. Therefore, the system of language in the mind, in permitting unbounded structures, turns out to be of such a distinct logical type that it remains unclear how it can even be implemented in the finite neuronal structures allowing for only finite processes (see also Postal 2009). ... of natural language(s) as those that do not reside in biological entities or structures per se because they arise only when the brain extends to connect to the outer world consisting of language users, objects, events, processes etc., thereby providing the scaffolding for such otherwise biologically meaningless symbolic patterns. ...
Article
Full-text available
This article will present a critique of the neurocentric view of language and cognition by locating it within the context of unification in cognitive science. While unity consists in the integration of the constraints, contents, and operations of various levels or scales of organization of the cognitive system, it contrasts with disunity. Disunity emanates from variations in structure and content at any level of the cognitive system that gives rise to significant and often unique differences in experience, appearance, form, and organization of a cognitive phenomenon at the given level. This happens when the given level is looked at in greater detail. For instance, the gap in the organizational character between a cognitive schema for reasoning how and whether to travel and its account in terms of neuronal activation patterns reflects disunity. Many neurobiological accounts of language aim at the integration of the cognitive organization of language with the neuronal structures at bottom in order to achieve unity, but disunity arises from the special nature of the symbolic/cognitive properties of natural language which are argued to reside neither in the brain nor in the environment alone most plausibly because they are emergent patterns between designated brain states and various kinds of linguistic experience. The proposal that is advanced and then defended with special reference to language–biology relations employs Haugeland’s (Behavioral and Brain Sciences, 1978, 1, 215) notion of dimensions and levels, and thereby emphasizes that unity and disunity can coexist in an explanatory union but from different perspectives and orientations.
... As explained in Section 2.2, recursive combination plays an essential role in the construction of hierarchy in language. Recursive combination is rare in animal behavior but ubiquitous in human behavior; nor is it limited to language, occurring also in music, mathematics, object manipulation, planning, and so on (Greenfield 1991, Conway and Christiansen 2001, Jackendoff 2011, Hauser and Watumull 2017). Clarifying the evolution of recursive combination ability, which we posit as a generalized version of Merge in generative linguistics, is one of the keys to understanding the origins of human language (Hauser et al. 2002). ...
Article
Full-text available
Evolinguistics is an attempt to clarify the origins and evolution of language and communication, thereby deepening our understanding of humans from an evolutionary perspective. The origins of language are characterized by the biological evolution of abilities related to language and communication, and the evolution of language by the structuralization and complexification of language knowledge as well as communication systems through cultural evolution. In Evolinguistics, two idiosyncrasies of human linguistic communication are the primary focus, namely, using hierarchically organized symbol sequences in language and sharing intentions in communication. We believe that the integration of these two characteristics made humans co-creative and smart, and in particular gave us knowledge co-creation capacity. The emergent constructive approach plays an important role in this research, a methodology for analyzing complex systems by constructing and operating the evolutionary and emergent processes of complex phenomena. Two studies taking this approach are introduced in this paper. One is a language evolution experiment in a laboratory to consider the process, mechanisms, and neural basis of symbolic communication systems. The other is an evolutionary simulation of recursive combination, which is thought of as the essential ability to form hierarchical structures. A hypothesis integrating intention sharing and recursive combination is discussed as an abductive reasoning mechanism for understanding others' intentions.
... echoic memory, phonological loop), everything else about it is unconscious (syntax and semantics). Chomsky provides no direct empirical support for this "99%" claim, although it is difficult to think of ways to conduct a controlled investigation of language use once one considers the range of cognitive functions narrow syntax might be contributing to (Hauser & Watumull 2017). Dor (2017: 44) even argues that the evolution of language aided our ability to lie possibly more than it aided our ability to communicate: "We evolved for lying, and because of lying, just as much as we evolved for and because of honest communication". ...
Article
Full-text available
In the Minimalist Program, the place of linguistic communication in language evolution and design is clear: It is assumed to be secondary to internalisation. I will defend this position against its critics, and maintain that natural selection played a more crucial role in selecting features of externalization and communication than in developing the computational system of language, following some core insights of Minimalism. The lack of communicative advantages to many core syntactic processes supports the Minimalist view of language use. Alongside the computational system, human language exhibits ostensive-inferential communication via open-ended combinatorial productivity, and I will explore how this system is compatible with, and does not preclude, a Minimalist model of the language system.
... Critically, they do not only have to detect the sameness relation between the last two syllables but also have to associate it with the "correct serial" position (Gervain et al., 2012;Endress et al., 2007). Once a sameness detector is available, it can form associations with representations of sequential positions or other stimuli (Kabdebon & Dehaene-Lambertz, 2019), allowing learners to acquire more complex, composite rules, which is one of the hallmarks of complex cognition (Hauser & Watumull, 2017;Dehaene, Meyniel, Wacongne, Wang, & Pallier, 2015;Corballis, 2014;Fitch & Martins, 2014). ...
Article
Full-text available
Language has a complex grammatical system we still have to understand computationally and biologically. However, some evolutionarily ancient mechanisms have been repurposed for grammar so that we can use insight from other taxa into possible circuit-level mechanisms of grammar. Drawing upon recent evidence for the importance of disinhibitory circuits across taxa and brain regions, I suggest a simple circuit that explains the acquisition of core grammatical rules used in 85% of the world's languages: grammatical rules based on sameness/difference relations. This circuit acts as a sameness detector. “Different” items are suppressed through inhibition, but presenting two “identical” items leads to inhibition of inhibition. The items are thus propagated for further processing. This sameness detector thus acts as a feature detector for a grammatical rule. I suggest that having a set of feature detectors for elementary grammatical rules might make language acquisition feasible based on relatively simple computational mechanisms.
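The disinhibition logic in this abstract can be caricatured in a few lines. The following is a toy sketch of the general idea (inhibition of inhibition gating a sameness signal), not the paper's circuit model; the function name and the unit-activity scheme are illustrative assumptions of my own.

```python
# Toy sameness detector built from inhibition of inhibition (a caricature of
# the disinhibitory-circuit idea, not the paper's biophysical model).
# A mismatch between two items drives an inhibitory unit that suppresses the
# output; when the items match, a second inhibitory unit suppresses the first
# ("inhibition of inhibition"), so the signal is propagated onward.

def sameness_detector(item_a, item_b):
    mismatch_inhibition = 1 if item_a != item_b else 0  # inhibits the output
    disinhibition = 1 if item_a == item_b else 0        # inhibits the inhibitor
    gate = max(0, mismatch_inhibition - disinhibition)  # net inhibition
    output = 1 - gate  # output fires only when net inhibition is zero
    return output

# A sameness-based grammatical rule (e.g., the AA in an AAB pattern):
assert sameness_detector("ba", "ba") == 1  # identical items propagate
assert sameness_detector("ba", "po") == 0  # different items are suppressed
```

The point of the sketch is that a single disinhibitory motif suffices to turn item identity into a propagated feature, which could then be associated with serial positions or other rules downstream.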
... How do the present (and previous) cross-domain structural priming effects relate to potential neural architectures and mechanisms within the brain? Indeed, the idea that language and other cognitive domains might share aspects of functional, behavioural, and neuroanatomical organization is by no means a new one (e.g., Piaget, 1955), and some available reports indicate at least some correspondence between language-related brain circuits and those supporting other cognitive operations (Hauser & Watumull, 2017; Lelekov, Franck, Dominey, & Georgieff, 2000; Makuuchi, Gahlman, & Friederici, 2012; Nakai & Okanoya, 2018; Patel, 2003). On the other hand, some of the existing neuroimaging literature suggests a substantial degree of independence between the brain networks responsible for (specifically) mathematical and linguistic cognition (e.g., Amalric & Dehaene, 2016; Fedorenko, Behr, & Kanwisher, 2011; Monti, Parsons, & Osherson, 2012). ...
Article
Full-text available
A number of recent studies found evidence for shared structural representations across different cognitive domains such as mathematics, music, and language. For instance, Scheepers et al. (2011) showed that English speakers’ choices of relative clause (RC) attachments in partial sentences like The tourist guide mentioned the bells of the church that … can be influenced by the structure of previously solved prime equations such as 80–(9+1)×5 (making high RC-attachments more likely) versus 80–9+1×5 (making low RC-attachments more likely). Using the same sentence completion task, Experiment 1 of the present paper fully replicated this cross-domain structural priming effect in Russian, a morphologically rich language. More interestingly, Experiment 2 extended this finding to more complex three-site attachment configurations and showed that, relative to a structurally neutral baseline prime condition, N1-, N2-, and N3-attachments of RCs in Russian were equally susceptible to structural priming from mathematical equations such as 18+(7+(3+11))×2, 18+7+(3+11)×2, and 18+7+3+11×2, respectively. The latter suggests that cross-domain structural priming from mathematics to language must rely on detailed, domain-general representations of hierarchical structure.
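The prime equations above differ only in bracketing, which fixes the hierarchical structure, and hence the value, of otherwise identical symbol strings. A quick illustration (my own sketch using Python's standard operator precedence, not the authors' materials):

```python
# The prime equations from Scheepers et al. differ only in how the same
# symbols are grouped, i.e., in the parse tree. Standard arithmetic
# precedence makes the distinct hierarchical structures yield distinct values.

high_attach = 80 - (9 + 1) * 5   # subtraction scopes over the product: 80 - 50
low_attach  = 80 - 9 + 1 * 5     # left-to-right with precedence: 71 + 5

print(high_attach, low_attach)   # 30 76

# The three-site configurations from Experiment 2:
n1 = 18 + (7 + (3 + 11)) * 2     # 18 + 21*2 = 60
n2 = 18 + 7 + (3 + 11) * 2       # 25 + 28   = 53
n3 = 18 + 7 + 3 + 11 * 2         # 28 + 22   = 50
print(n1, n2, n3)                # 60 53 50
```

The claim of the priming studies is that solving one of these structures biases the parser toward the analogous attachment height in a subsequent relative-clause completion.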
... The labelling capacity came first, but did not achieve its full, modern reach until globularisation occurred. This suggests that the language-music, language-mathematics and language-morality interfaces (assuming a common computational link between these capacities, à la Hauser & Watumull, 2017) emerged at different evolutionary timepoints and that it may be possible to plot a timeline for the emergence of these interfaces. For instance, we can date musical instruments to around 35kya (such as bone and ivory flutes; Conard et al., 2009); perhaps the language-mathematics interface emerged early in the globularisation process, then the language-morality interface, and finally the language-music interface. ...
Article
Full-text available
Language evolution has long been researched. I will review a number of emerging research directions which arguably have the potential to provide a finer-grained and more structured picture of how and when the capacity for language emerged. In particular, human-specific levels of braincase globularity, and the broader process of self-domestication within which globularity seems capable of being encapsulated, will be argued to be the central pillars of any satisfactory and interdisciplinary model of language evolution.
... The cognitive ability to process and produce both complex hierarchically-structured speech and music is frequently found on a list of traits explaining humans' unique standing among animals (Hauser & Watumull, 2017;Martins, Gingras, Puig-Waldmueller, & Fitch, 2017). Music, like language, is a universal social phenomenon (Blacking, 1973) whose origins remain elusive (Merker, Morley, & Zuidema, 2015) because they are less clearly rooted in biology than is the case for language, for which the selective advantage is apparent (Cross & Morley, 2009). ...
Chapter
Full-text available
Human music, like language, is a cultural universal whose origins remain elusive. Some individuals have a congenital impairment in music processing and singing, which might be biologically rooted. Congenital amusia, a neurodevelopmental disorder, has been studied for around 15 years by only a few research groups worldwide. Although amusia was originally thought of as a music-specific disorder, it was demonstrated relatively quickly that it also affects the perception of speech intonation in laboratory conditions. This finding has spurred a fruitful research program, which has not only contributed to the ongoing debate about modularity by investigating pitch processing in music and speech, but which has also targeted various other aspects of linguistic processing. After a comprehensive introduction into the currently known characteristics of amusia, empirical studies on speech processing in non-tone and tone language speakers diagnosed with amusia will be reviewed. Research on English- and French-speaking amusics has primarily focused on the processing of phonology, linguistic and affective prosody, as well as on verbal memory. Amusics are impaired in several aspects of phonological processing, in differentiating between statements and questions, and further, in perceiving emotional tone. In the last few years, a growing body of research involving tone language speakers has reported impairments in processing of lexical tone and speech intonation among amusics. Hence, research on Mandarin and Cantonese speakers has provided evidence for the notion that amusia is not a disorder specific to non-tone language speakers, and further, it has also increased our knowledge about the core deficits of amusia. Future research will need to replicate current findings in speakers of languages other than those already studied and also clarify how this auditory disorder is linked to other learning disorders.
... But it is worth considering that due to the rational aspect that is innate to the human brain (Hauser and Watumull 2016), seemingly random variations in language are likely to be subject to ad hoc corrections for the sake of clarity and mnemotechnical purposes. In this light some authors have even proposed that there is no clear divide between natural and artificial languages. ...
Article
Full-text available
This contribution discusses the notion of an ideal language and its implications for the development of knowledge organization theory. We explore the notion of an ideal language from both a historical and a formal perspective and seek to clarify the key concepts involved. An overview of some of the momentous attempts to produce an ideal language is combined with an elucidation of the consequences the idea had in modern thought. We reveal the possibilities that the idea opened up and go into some detail to explain the theoretical boundaries it ran into.
... However, this system matures slowly over time and depends on experience with faces as elegantly demonstrated by studies of individuals with early-appearing cataracts that were later removed (Rhodes et al. 2017). A similar characterization applies to language, wherein there are core underlying computations and representations, some specific to language and others shared (Hauser & Watumull 2016), but with experience selecting among the options to generate specific languages (e.g., French, English). ...
Article
Burkart et al.'s impressive synthesis will serve as a valuable resource for intelligence research. Despite its strengths, the target article falls short of offering compelling explanations for the evolution of intelligence. Here, we outline its shortcomings, illustrate how these can lead to misguided conclusions about the evolution of intelligence, and suggest ways to address the article's key questions.
Article
Here, we specifically discuss why and to what extent we agree with Burkart et al. about the coexistence of general intelligence and modular cognitive adaptations, and why we believe that the distinction between primary and secondary modules they propose is indeed essential.
Article
We welcome the cross-disciplinary approach taken by Burkart et al. to probe the evolution of intelligence. We note several concerns: the uses of g and G, rank-ordering species on cognitive ability, and the meaning of general intelligence. This subject demands insights from several fields, and we look forward to cross-disciplinary collaborations.
Article
Full-text available
Are the mechanisms underlying variations in the performance of animals on cognitive test batteries analogous to those of humans? Differences might result from procedural inconsistencies in test battery design, but also from differences in how animals and humans solve cognitive problems. We suggest differentiating associative-based (learning) from rule-based (knowing) tasks to further our understanding of cognitive evolution across species.
... Lastly, it is simply not the case that nonhuman animals are perceived as mere bundles of modules, fixed and inflexible. ...
Article
The goal of our target article was to lay out current evidence relevant to the question of whether general intelligence can be found in nonhuman animals in order to better understand its evolution in humans. The topic is a controversial one, as evident from the broad range of partly incompatible comments it has elicited. The main goal of our response is to translate these issues into testable empirical predictions, which together can provide the basis for a broad research agenda.
Article
Burkart et al. present a paradox: general factors of intelligence exist among individual differences (g) in performance in several species, and also at the aggregate level (G); however, there is ambiguous evidence for the existence of g when analyzing data using a mixed approach, that is, when comparing individuals of different species using the same cognitive ability battery. Here, we present an empirical solution to this paradox.
Article
The authors evaluate evidence for general intelligence (g) in nonhumans but lean heavily toward mammalian data. They mention, but do not discuss in detail, evidence for g in nonmammalian species, for which substantive material exists. I refer to a number of avian studies, particularly in corvids and parrots, which would add breadth to the material presented in the target article.
Article
Across taxonomic subfamilies, variations in intelligence (G) are sometimes related to brain size. However, within species, brain size plays a smaller role in explaining variations in general intelligence (g), and the cause-and-effect relationship may be opposite to what appears intuitive. Instead, individual differences in intelligence may reflect variations in domain-general processes that are only superficially related to brain size.
Article
Full-text available
Conceptualizing intelligence in its biological context, as the expression of manifold adaptations, compels a rethinking of measuring this characteristic in humans, relying also on animal studies of analogous skills. Mental manipulation, as an extension of object manipulation, provides a continuous, biologically based concept for studying G as it pertains to individual differences in humans and other species.
... Indeed, many animal species can learn experimentally-generated statistical and structural patterns within one modality (ten Cate, 2014; ten Cate & Okanoya, 2012). Although non-human animals are capable of modality-specific structure learning, discrete/continuous cross-modal mappings, and even second-order relational matching (Fagot & Parron, 2010; Smirnova et al., 2015), to date cross-modal structural isomorphisms have only been shown in humans and computer-simulated neural networks (Dienes, Altmann, Gao, & Goode, 1995; Hauser & Watumull, 2016). ...
Preprint
Full-text available
Natural language syntax can serve as a major test for how to integrate two infamously distinct frameworks: symbolic representations and connectionist neural networks. Building on a recent neurocomputational architecture for syntax (ROSE), I discuss the prospects of reconciling the neural code for hierarchical 'vertical' syntax with linear and predictive 'horizontal' processes via a hybrid neurosymbolic model. I argue that the former can be accounted for via the higher levels of ROSE in terms of vertical phrase structure representations, while the latter can explain horizontal forms of linguistic information via the tuning of the lower levels to statistical and perceptual inferences. One prediction of this is that artificial language models will contribute to the cognitive neuroscience of horizontal morphosyntax, but much less so to hierarchically compositional structures. I claim that this perspective helps resolve many current tensions in the literature. Options for integrating these two neural codes are discussed, with particular emphasis on how predictive coding mechanisms can serve as interfaces between symbolic oscillatory phase codes and population codes for the statistics of linearized aspects of syntax. Lastly, I provide a neurosymbolic mathematical model for how to inject symbolic representations into a neural regime encoding lexico-semantic statistical features.
Preprint
Full-text available
Table of contents: Introduction; 1. The proposition: from situations to facts; 2. The situation, 2.1. The shared situation, 2.2. The limits of the situation, 2.3. The modes of interpretation of stimuli, 2.4. Direct perception, memory, and testimony; 3. The situation is not a "big fact" composed of smaller facts; 4. Facts do not exist independently of the proposition; 5. The fact as a "quality" of a situation, 5.1. The problem of "negative facts", 5.2. Analogy, comparison, and metaphor; 6. There is no situation other than the present situation, 6.1. Time and the present situation, 6.2. The situation and causal dependencies, 6.3. Facts in the past and the present situation, 6.4. Facts in the future and the present situation; 7. Convergence and coherence, 7.1. Convergence and relevance, 7.2. Coherence and truth
Article
Advances in neuroscience and cognition are crucial for addressing contemporary educational challenges. The study introduces a novel intervention delivered through TikTok in the 5th year of primary school, using an interdisciplinary approach based on the "languages of the brain." It incorporates participatory learning, musical integration, and a natural-language approach, encompassing choreography and video creation. Using quantitative and qualitative methodologies with 101 students, the results indicate a positive impact on the comprehension of content about the Middle Ages. They underscore the effectiveness of this approach in improving student engagement in today's interdisciplinary education.
Article
A comprehensive neural model of language must accommodate four components: representations, operations, structures and encoding. Recent intracranial research has begun to map out the feature space associated with syntactic processes, but the field lacks a unified framework that can direct invasive neural analyses. This article proposes a neurocomputational architecture for syntax, termed ROSE (Representation, Operation, Structure, Encoding). Under ROSE, the basic data structures of syntax are atomic features, types of mental representations (R), and are coded at the single-unit and ensemble level. Operations (O) transforming these units into manipulable objects accessible to subsequent structure-building levels are coded via high frequency broadband γ activity. Low frequency synchronization and cross-frequency coupling code for recursive structural inferences (S). Distinct forms of low frequency coupling encode these structures onto distinct workspaces (E). Causally connecting R to O is spike-phase/LFP coupling; connecting O to S is phase-amplitude coupling; connecting S to E are frontotemporal traveling oscillations. ROSE is reliant on neurophysiologically plausible mechanisms and provides an anatomically precise and falsifiable grounding for natural language syntax.
Article
This paper offers a critical analysis of the book Animal Languages, written by the Dutch philosopher Eva Meijer. The author's aim is to show that animals have complex cognitive and communicative skills, but her strategy ties that communicative complexity to the supposedly linguistic character of animal communication codes. Beyond claiming that animals have language (a frequent claim among ethologists), the book leaves much to be desired from a scientific standpoint. The paper will show that Meijer's book is riddled with contradictions, exaggerations, extravagant or unfounded claims, misrepresentations of other authors' views, errors in the use of bibliographic references, and serious gaps in knowledge of various topics.
Article
Significance Determining the cognitive differences between human and nonhuman primates is a central goal of cognitive neuroscience. We show that intuitions of geometry are present in humans but absent in baboons. A simple intruder task in which subjects must find which of six geometric shapes is different reveals an effect of geometric regularity in all human groups regardless of age, education, and culture, yet this effect is absent in baboons. Models of the ventral visual pathway for object recognition predict baboons’ performance, but a symbolic model is needed to account for human performance. Our results underline the human propensity for symbolic abstraction, even in an elementary shape perception task, and provide a challenge for neural network models of human shape perception.
Preprint
The capacity to store information in working memory strongly depends upon the ability to recode the information in a compressed form. Here, we tested the theory that human adults encode binary sequences of stimuli in memory using a recursive compression algorithm akin to a "language of thought", capable of capturing nested patterns of repetitions and alternations. In five experiments, we probed memory for auditory or visual sequences using both subjective and objective measures. We used a sequence violation paradigm in which participants detected occasional violations in an otherwise fixed sequence. Both subjective ratings of complexity and objective violation detection rates were well predicted by minimal description length (also known as Kolmogorov complexity) in the binary version of the "language of geometry", a formal language previously found to account for the human encoding of complex spatial sequences. We contrasted the language model with a model based solely on surprise given the stimulus transition probabilities. While both models accounted for variance in the data, the language model dominated over the transition probability model for long sequences (with a number of elements far exceeding the limits of working memory). We use model comparison to show that the minimal description length in a recursive language provides a better fit than a variety of previous encoding models for sequences. The data support the hypothesis that, beyond the extraction of statistical knowledge, human sequence coding relies on an internal compression using language-like nested structures.
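The compression idea in the abstract above can be illustrated with a toy sketch. This is not the paper's formal "language of geometry" model, only a minimal stand-in: a description length that rewards nested repetitions in a binary string, so that regular sequences get shorter descriptions than irregular ones.

```python
def description_length(seq: str) -> int:
    """Toy minimal-description-length estimate for a binary string.

    Tries to describe the sequence as k repetitions of a shorter block;
    recursing on the block captures nested patterns (e.g. repeats of
    alternations). Illustrative only, not the authors' actual model.
    """
    n = len(seq)
    best = n  # fallback: spell out every symbol literally
    for block_len in range(1, n):
        if n % block_len == 0 and seq == seq[:block_len] * (n // block_len):
            # cost = description of the block + a small constant
            # for the "repeat" instruction itself
            best = min(best, description_length(seq[:block_len]) + 2)
    return best

# A highly regular sequence compresses far more than an irregular one.
print(description_length("01010101"))  # → 4 (repeat("01", 4))
print(description_length("01101000"))  # → 8 (no repeating block; literal)
```

On this toy measure, the regular sequence is predicted to be easier to hold in memory, in the spirit of the finding reported above.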
Article
Full-text available
Language evolution has long been a topic of research. I review a number of broad, emerging research directions that arguably have the potential to contribute to our understanding of language evolution. Emerging topics in genomics and neurolinguistics are explored, and human-specific braincase globularity, together with the broader process of self-domestication within which globularity may be subsumed, is argued to form the central pillars of any satisfactory, interdisciplinary model of language evolution.
Chapter
Full-text available
Abstract The main aim of this paper is to show the plausibility of the thesis that human beings think in natural language. We are not supposing that this stance implies a radical relativism about the functioning of the mind; instead, we argue that it is possible to uphold an innatist and universalist thesis for the mechanisms of thought, sustained by our innate Faculty of Language, and a relativist one for the idiosyncratic contents of thought involved in that process. We will show that Fodor's proposal (1975, 2008) on the existence of a universal Mentalese (Language of Thought) presupposes a mechanism for translating universal thought into particular languages. This is difficult to sustain without accepting ad hoc motivations, so it appears an uneconomical approach. Across the literature, we find confusion in the treatment of the language-thought relationship that does not differentiate content from mechanisms. Here we intend to differentiate them clearly, in order to clarify the discussions between universalist and relativist approaches. To do so we will discuss, from a linguistic perspective, what thought is; reflect on the distinctions and similarities between concepts and linguistic meanings; and show how concepts are instantiated in natural languages. Keywords: Concepts, Universal Grammar, Linguistic Relativism
Method
Full-text available
This file serves to provide a comprehensive (self-archived) bibliography of references related to working memory and language learning (including first and second language acquisition and processing). I am putting this up here in the hope that it can help those researchers and students interested in similar research topics as I am (as enthusiastic with working memory and language as I am). I will try my best to update this regularly (so keep an eye on it!). Enjoy!
Article
Burkart et al.'s proposal is based on three false premises: (1) theories of the mind are either domain-specific/modular (DSM) or domain-general (DG); (2) DSM systems are considered inflexible, built by nature; and (3) animal minds are deemed as purely DSM. Clearing up these conceptual confusions is a necessary first step in understanding how general intelligence evolved.
Article
Full-text available
Only humans possess the faculty of language that allows an infinite array of hierarchically structured expressions (Hauser et al., 2002; Berwick and Chomsky, 2015). Similarly, humans have a capacity for infinite natural numbers, while all other species seem to lack such a capacity (Gelman and Gallistel, 1978; Dehaene, 1997). Thus, the origin of this numerical capacity and its relation to language have been of much interdisciplinary interest in developmental and behavioral psychology, cognitive neuroscience, and linguistics (Dehaene, 1997; Hauser et al., 2002; Pica et al., 2004). Hauser et al. (2002) and Chomsky (2008) hypothesize that a recursive generative operation that is central to the computational system of language (called Merge) can give rise to the successor function in a set-theoretic fashion, from which capacities for discretely infinite natural numbers may be derived. However, a careful look at two domains in language, grammatical number and numerals, reveals no trace of the successor function. Following behavioral and neuropsychological evidence that there are two core systems of number cognition innately available, a core system of representation of large, approximate numerical magnitudes and a core system of precise representation of distinct small numbers (Feigenson et al., 2004), I argue that grammatical number reflects the core system of precise representation of distinct small numbers alone. In contrast, numeral systems arise from integrating the pre-existing two core systems of number and the human language faculty. To the extent that my arguments are correct, linguistic representations of number, grammatical number, and numerals do not incorporate anything like the successor function.
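The set-theoretic route from Merge to a successor function that the abstract above discusses (and critiques) can be sketched in a few lines. This is only an illustration of the conjecture, using a Zermelo-style construction of our own choosing, not the paper's notation: if Merge is bare set formation, merging an object with itself yields its singleton, which behaves as a successor.

```python
# Toy sketch of the Hauser/Chomsky conjecture that Merge could give rise
# to the successor function. Names and the Zermelo-style numerals are our
# illustrative choices, not the authors'.

def merge(a, b):
    """Merge as bare set formation: Merge(a, b) = {a, b}."""
    return frozenset({a, b})

def successor(n):
    """Merging an object with itself yields the singleton {n},
    i.e. Zermelo's successor function S(n) = {n}."""
    return merge(n, n)

def value(n):
    """Decode a Zermelo numeral back to an integer by counting nesting."""
    return 0 if n == frozenset() else 1 + value(next(iter(n)))

zero = frozenset()
three = successor(successor(successor(zero)))
print(value(three))  # → 3: each Merge step adds exactly one
```

Each application of the single operation yields a new, discretely distinct "number", which is the sense in which a recursive generative operation could underwrite the natural numbers; the abstract's point is that grammatical number and numerals show no trace of this function.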
Article
Full-text available
Questions related to the uniqueness of language can only be addressed properly by referring to sound knowledge of the relevant cognitive abilities of nonhuman animals. A key question concerns the nature and extent of animal rule-learning abilities. I discuss two approaches used to assess these abilities. One is comparing the structures of animal vocalizations to linguistic ones, and another is addressing the grammatical rule- and pattern-learning abilities of animals through experiments using artificial grammars. Neither of these approaches has so far provided unambiguous evidence of advanced animal abilities. However, when we consider how animal vocalizations are analyzed, the types of stimuli and tasks that are used in artificial grammar learning experiments, the limited number of species examined, and the groups to which these belong, I argue that the currently available evidence is insufficient to arrive at firm conclusions concerning the limitations of animal grammatical abilities. As a consequence, the gap between human linguistic rule-learning abilities and those of nonhuman animals may be smaller and less clear than is currently assumed. This means that it is still an open question whether a difference in the rule-learning and rule abstraction abilities between animals and humans played the key role in the evolution of language.
Article
Full-text available
Significance Our work addresses the long-standing issue of the relationship between mathematics and language. By scanning professional mathematicians, we show that high-level mathematical reasoning rests on a set of brain areas that do not overlap with the classical left-hemisphere regions involved in language processing or verbal semantics. Instead, all domains of mathematics we tested (algebra, analysis, geometry, and topology) recruit a bilateral network, of prefrontal, parietal, and inferior temporal regions, which is also activated when mathematicians or nonmathematicians recognize and manipulate numbers mentally. Our results suggest that high-level mathematical thinking makes minimal use of language areas and instead recruits circuits initially involved in space and number. This result may explain why knowledge of number and space, during early childhood, predicts mathematical achievement.
Article
Full-text available
The relationship between recursive sentence embedding and theory-of-mind (ToM) inference is investigated in three persons with Broca's aphasia, two persons with Wernicke's aphasia, and six persons with mild and moderate Alzheimer's disease (AD). We asked questions of four types about photographs of various real-life situations. Type 4 questions asked participants about intentions, thoughts, or utterances of the characters in the pictures (“What may X be thinking/asking Y to do?”). The expected answers typically involved subordinate clauses introduced by conjunctions or direct quotations of the characters' utterances. Broca's aphasics did not produce answers with recursive sentence embedding. Rather, they projected themselves into the characters' mental states and gave direct answers in the first person singular, with relevant ToM content. We call such replies “situative statements.” Where the question concerned the mental state of the character but did not require an answer with sentence embedding (“What does X hate?”), aphasics gave descriptive answers rather than situative statements. Most replies given by persons with AD to Type 4 questions were grammatical instances of recursive sentence embedding. They also gave a few situative statements but the ToM content of these was irrelevant. In more than one third of their well-formed sentence embeddings, too, they conveyed irrelevant ToM contents. Persons with moderate AD were unable to pass secondary false belief tests. The results reveal double dissociation: Broca's aphasics are unable to access recursive sentence embedding but they can make appropriate ToM inferences; moderate AD persons make the wrong ToM inferences but they are able to access recursive sentence embedding. The double dissociation may be relevant for the nature of the relationship between the two recursive capacities. 
Broca's aphasics compensated for the lack of recursive sentence embedding by recursive ToM reasoning represented in very simple syntactic forms: they used one recursive subsystem to stand in for another recursive subsystem.
Article
Full-text available
The more potential helpers there are, the less likely any individual is to help. A traditional explanation for this bystander effect is that responsibility diffuses across the multiple bystanders, diluting the responsibility of each. We investigate an alternative, which combines the volunteer's dilemma (each bystander is best off if another responds) with recursive theory of mind (each infers what the others know about what he knows) to predict that actors will strategically shirk when they think others feel compelled to help. In 3 experiments, participants responded to a (fictional) person who needed help from at least 1 volunteer. Participants were in groups of 2 or 5 and had varying information about whether other group members knew that help was needed. As predicted, people's decision to help zigzagged with the depth of their asymmetric, recursive knowledge (e.g., "John knows that Michael knows that John knows help is needed"), and replicated the classic bystander effect when they had common knowledge (everyone knowing what everyone knows). The results demonstrate that the bystander effect may result not from a mere diffusion of responsibility but specifically from actors' strategic computations.
Article
Full-text available
The most critical attribute of human language is its unbounded combinatorial nature: smaller elements can be combined into larger structures on the basis of a grammatical system, resulting in a hierarchy of linguistic units, such as words, phrases and sentences. Mentally parsing and representing such structures, however, poses challenges for speech comprehension. In speech, hierarchical linguistic structures do not have boundaries that are clearly defined by acoustic cues and must therefore be internally and incrementally constructed during comprehension. We found that, during listening to connected speech, cortical activity of different timescales concurrently tracked the time course of abstract linguistic structures at different hierarchical levels, such as words, phrases and sentences. Notably, the neural tracking of hierarchical linguistic structures was dissociated from the encoding of acoustic cues and from the predictability of incoming words. Our results indicate that a hierarchy of neural processing timescales underlies grammar-based internal construction of hierarchical linguistic structure.
Article
Full-text available
Linguistic analyses suggest that sentences are not mere strings of words but possess a hierarchical structure with constituents nested inside each other. We used functional magnetic resonance imaging (fMRI) to search for the cerebral mechanisms of this theoretical construct. We hypothesized that the neural assembly that encodes a constituent grows with its size, which can be approximately indexed by the number of words it encompasses. We therefore searched for brain regions where activation increased parametrically with the size of linguistic constituents, in response to a visual stream always comprising 12 written words or pseudowords. The results isolated a network of left-hemispheric regions that could be dissociated into two major subsets. Inferior frontal and posterior temporal regions showed constituent size effects regardless of whether actual content words were present or were replaced by pseudowords (jabberwocky stimuli). This observation suggests that these areas operate autonomously of other language areas and can extract abstract syntactic frames based on function words and morphological information alone. On the other hand, regions in the temporal pole, anterior superior temporal sulcus and temporo-parietal junction showed constituent size effect only in the presence of lexico-semantic information, suggesting that they may encode semantic constituents. In several inferior frontal and superior temporal regions, activation was delayed in response to the largest constituent structures, suggesting that nested linguistic structures take increasingly longer time to be computed and that these delays can be measured with fMRI.
Article
Full-text available
Comparative pattern learning experiments investigate how different species find regularities in sensory input, providing insights into cognitive processing in humans and other animals. Past research has focused either on one species' ability to process pattern classes or different species' performance in recognizing the same pattern, with little attention to individual and species-specific heuristics and decision strategies. We trained and tested two bird species, pigeons (Columba livia) and kea (Nestor notabilis, a parrot species), on visual patterns using touch-screen technology. Patterns were composed of several abstract elements and had varying degrees of structural complexity. We developed a model selection paradigm, based on regular expressions, that allowed us to reconstruct the specific decision strategies and cognitive heuristics adopted by a given individual in our task. Individual birds showed considerable differences in the number, type and heterogeneity of heuristic strategies adopted. Birds' choices also exhibited consistent species-level differences. Kea adopted effective heuristic strategies, based on matching learned bigrams to stimulus edges. Individual pigeons, in contrast, adopted an idiosyncratic mix of strategies that included local transition probabilities and global string similarity. Although performance was above chance and quite high for kea, no individual of either species provided clear evidence of learning exactly the rule used to generate the training stimuli. Our results show that similar behavioral outcomes can be achieved using dramatically different strategies and highlight the dangers of combining multiple individuals in a group analysis. These findings, and our general approach, have implications for the design of future pattern learning experiments, and the interpretation of comparative cognition research more generally.
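An edge-bigram heuristic of the kind the study attributes to kea can be expressed as a regular expression, in line with the study's regex-based model selection. The sketch below is hypothetical (placeholder tokens and training strings, not the study's stimuli): accept a string whenever its first and last bigrams were seen in training, ignoring everything in between.

```python
import re

# Hypothetical training strings over placeholder tokens A/B/C.
TRAINED = ["ABCAB", "ABBAB", "ABCCB"]

def edge_bigram_matcher(training):
    """Build a regex implementing an edge-bigram heuristic: the string
    must begin with a trained initial bigram and end with a trained
    final bigram; the middle is unconstrained."""
    starts = sorted({s[:2] for s in training})
    ends = sorted({s[-2:] for s in training})
    pattern = "^(?:%s).*(?:%s)$" % ("|".join(starts), "|".join(ends))
    return re.compile(pattern)

matcher = edge_bigram_matcher(TRAINED)
print(bool(matcher.match("ABXYZCB")))  # edges seen in training -> True
print(bool(matcher.match("XYABCAB")))  # novel start bigram -> False
```

Comparing which of several such candidate regexes best predicts an individual's choices is, in miniature, the model-selection logic the abstract describes.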
Article
Full-text available
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
Article
Full-text available
The inferior frontal gyrus (IFG) is active both during goal-directed action and while observing the same motor act, leading to the idea that the meaning of a motor act (action understanding) is also represented in this "mirror neuron system" (MNS). However, in the dual-loop model, based on dorsal and ventral visual streams, the MNS is thought to be a function of the dorsal stream, projecting to pars opercularis (BA44) of IFG, while recent studies suggest that conceptual meaning and semantic analysis are a function of ventral connections, projecting mainly to pars triangularis (BA45) of IFG. To resolve this discrepancy, we investigated action observation (AO) and imitation (IMI) using fMRI in a large group of subjects. A grasping task (GR) assessed the contribution from movement without AO. We analyzed connections of the MNS-related areas within IFG with postrolandic areas using activation-based DTI. We found that action observation with imitation is mainly a function of the dorsal stream centered on the dorsal part of BA44, but also involves BA45, which is dorsally and ventrally connected to the same postrolandic regions. The current finding suggests that BA45 is the crucial part where the MNS and the dual-loop system interact.
Article
Full-text available
Humans have a strong proclivity for structuring and patterning stimuli: whether in space or time, we tend to mentally order stimuli in our environment and organize them into units with specific types of relationships. A crucial prerequisite for such organization is the cognitive ability to discern and process regularities among multiple stimuli. To investigate the evolutionary roots of this cognitive capacity, we tested chimpanzees—which, along with bonobos, are our closest living relatives—for simple, variable distance dependency processing in visual patterns. We trained chimpanzees to identify pairs of shapes either linked by an arbitrary learned association (arbitrary associative dependency) or a shared feature (same shape, feature-based dependency), and to recognize strings where items related in either of these ways occupied the first (leftmost) and last (rightmost) positions of the stimulus. We then probed the degree to which subjects generalized this pattern to new colors, shapes, and numbers of interspersed items. We found that chimpanzees can learn and generalize both types of dependency rules, indicating that the ability to encode both feature-based and arbitrary associative regularities over variable distances in the visual domain is not a human prerogative. Our results strongly suggest that these core components of human structural processing were already present in our last common ancestor with chimpanzees. Keywords: feature-based, arbitrary associative, operant task, touch screen, non-human primates
Article
Full-text available
Human ancestors first modified stones into tools 2.6 million years ago, initiating a cascading increase in technological complexity that continues today. A parallel trend of brain expansion during the Paleolithic has motivated over 100 years of theorizing linking stone toolmaking and human brain evolution, but empirical support remains limited. Our study provides the first direct experimental evidence identifying likely neuroanatomical targets of natural selection acting on toolmaking ability. Subjects received MRI and DTI scans before, during, and after a 2-year Paleolithic toolmaking training program. White matter fractional anisotropy (FA) showed changes in branches of the superior longitudinal fasciculus leading into left supramarginal gyrus, bilateral ventral precentral gyri, and right inferior frontal gyrus pars triangularis. FA increased from Scan 1-2, a period of intense training, and decreased from Scan 2-3, a period of reduced training. Voxel-based morphometry found a similar trend toward gray matter expansion in the left supramarginal gyrus from Scan 1-2 and a reversal of this effect from Scan 2-3. FA changes correlated with training hours and with motor performance, and probabilistic tractography confirmed that white matter changes projected to gray matter changes and to regions that activate during Paleolithic toolmaking. These results show that acquisition of Paleolithic toolmaking skills elicits structural remodeling of recently evolved brain regions supporting human tool use, providing a mechanistic link between stone toolmaking and human brain evolution. These regions participate not only in toolmaking, but also in other complex functions including action planning and language, in keeping with the hypothesized co-evolution of these functions.
Article
Full-text available
Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language's origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.
Article
Full-text available
Progress in understanding cognition requires a quantitative, theoretical framework, grounded in the other natural sciences and able to bridge between implementational, algorithmic and computational levels of explanation. I review recent results in neuroscience and cognitive biology that, when combined, provide key components of such an improved conceptual framework for contemporary cognitive science. Starting at the neuronal level, I first discuss the contemporary realization that single neurons are powerful tree-shaped computers, which implies a reorientation of computational models of learning and plasticity to a lower, cellular, level. I then turn to predictive systems theory (predictive coding and prediction-based learning) which provides a powerful formal framework for understanding brain function at a more global level. Although most formal models concerning predictive coding are framed in associationist terms, I argue that modern data necessitate a reinterpretation of such models in cognitive terms: as model-based predictive systems. Finally, I review the role of the theory of computation and formal language theory in the recent explosion of comparative biological research attempting to isolate and explore how different species differ in their cognitive capacities. Experiments to date strongly suggest that there is an important difference between humans and most other species, best characterized cognitively as a propensity by our species to infer tree structures from sequential data. Computationally, this capacity entails generative capacities above the regular (finite-state) level; implementationally it requires some neural equivalent of a push-down stack. I dub this unusual human propensity “dendrophilia”, and make a number of concrete suggestions about how such a system may be implemented in the human brain, about how and why it evolved, and what this implies for models of language acquisition. 
I conclude that, although much remains to be done, a neurally-grounded framework for theoretical cognitive science is within reach that can move beyond polarized debates and provide a more adequate theoretical future for cognitive biology.
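The computational claim above, that inferring tree structure requires generative capacities above the regular (finite-state) level, implemented by some neural equivalent of a push-down stack, can be made concrete with a classic example. The sketch below is illustrative only, not a model from the article: recognizing the supra-regular language A^n B^n, which no finite-state device can do for unbounded n, using an explicit stack.

```python
def recognizes_anbn(s: str) -> bool:
    """Recognize the supra-regular (context-free) language A^n B^n with
    an explicit push-down stack, the kind of memory resource the
    'dendrophilia' hypothesis takes human-like structure inference to
    require. Illustrative sketch only."""
    stack = []
    i = 0
    while i < len(s) and s[i] == "A":
        stack.append("A")   # push one marker per A
        i += 1
    while i < len(s) and s[i] == "B":
        if not stack:
            return False    # more Bs than As
        stack.pop()         # each B consumes one stored A
        i += 1
    # Accept only if the whole string was consumed, counts matched,
    # and the string was non-empty.
    return i == len(s) and not stack and len(s) > 0

print(recognizes_anbn("AAABBB"))  # True: counts match
print(recognizes_anbn("AABBB"))   # False: one B too many
```

The stack is what lets the recognizer match each closing element to a stored opening one, which is exactly the capacity needed to pair nested dependencies in a tree.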
Article
Full-text available
Deficits in the ability to understand and predict others' mental states are one of the central features of traumatic brain injury (TBI), leading to problems in social and daily life such as social withdrawal and inability to maintain work or family relationships. Although several functional neuroimaging studies have identified a widely distributed brain network involved in the reading the mind in the eyes test (RMET), the brain regions necessary for this capacity are still heavily debated. In this study, we combined the RMET with a whole-brain voxel-based lesion symptom mapping (VLSM) approach to identify brain regions necessary for adequate RMET performance in a large sample of patients with penetrating TBI (pTBI). Our results revealed that pTBI patients performed worse on the RMET than non-head-injured controls, and impaired RMET performance was associated with lesions in the left inferior frontal gyrus (IFG). In summary, our findings suggest that the left IFG is a key region in reading the mind in the eyes, probably involved in a more general impairment of a semantic working memory system that facilitates reasoning about what others are feeling and thinking as expressed by the eyes.
Article
Full-text available
Several theoretical proposals for the evolution of language have sparked a renewed search for comparative data on human and non-human animal computational capacities. However, conceptual confusions still hinder the field, leading to experimental evidence that fails to test for comparable human competences. Here we focus on two conceptual and methodological challenges that affect the field generally: 1) properly characterizing the computational features of the faculty of language in the narrow sense; 2) defining and probing for human language-like computations via artificial language learning experiments in non-human animals. Our intent is to be critical in the service of clarity, in what we agree is an important approach to understanding how language evolved.
Article
Full-text available
Sixty years ago, Karl Lashley suggested that complex action sequences, from simple motor acts to language and music, are a fundamental but neglected aspect of neural function. Lashley demonstrated the inadequacy of then-standard models of associative chaining, positing a more flexible and generalized “syntax of action” necessary to encompass key aspects of language and music. He suggested that hierarchy in language and music builds upon a more basic sequential action system, and provided several concrete hypotheses about the nature of this system. Here, we review a diverse set of modern data concerning musical, linguistic, and other action processing, finding them largely consistent with an updated neuroanatomical version of Lashley's hypotheses. In particular, the lateral premotor cortex, including Broca's area, plays important roles in hierarchical processing in language, music, and at least some action sequences. Although the precise computational function of the lateral prefrontal regions in action syntax remains debated, Lashley's notion—that this cortical region implements a working-memory buffer or stack scannable by posterior and subcortical brain regions—is consistent with considerable experimental data.
Article
It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of such output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis, as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions, as well as to future empirical investigations, by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive.
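The distinction between a rule "in intension" and a list "in extension" can be made concrete with a small illustration of our own (a sketch, not drawn from the cited paper): a recursive definition strongly generates nested structure from a finite rule, whereas an extensional list merely enumerates outputs and defines nothing beyond what is listed.

```python
# A rule "in intension": a finite recursive definition that strongly
# generates nested structure. Any depth bound is a property of the
# output, not of the rule itself.
def embed(n):
    """Build an n-level center-embedded structure."""
    if n == 0:
        return 'x'                      # base case: an atomic element
    return ('a', embed(n - 1), 'b')     # inductive step: wrap and recurse

# A list "in extension": the same outputs enumerated by hand. It matches
# embed() for n <= 2, but it is not generative; nothing follows from it
# about deeper structures.
extensional = {0: 'x',
               1: ('a', 'x', 'b'),
               2: ('a', ('a', 'x', 'b'), 'b')}

print(embed(2))  # ('a', ('a', 'x', 'b'), 'b')
```

On this toy picture, even a language whose attested output never exceeds depth 2 could still be generated by the recursive rule, which is the sense in which embedding is diagnostic of recursion without being necessary for it.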
Article
Although developmental psychologists traditionally explore morality from a learning and development perspective, some aspects of the human moral sense may be built-in, having evolved to sustain collective action and cooperation as required for successful group living. In this article, I review a recent body of research with infants and toddlers, demonstrating surprisingly sophisticated and flexible moral behavior and evaluation in a preverbal population whose opportunity for moral learning is limited at best. Although this work itself is in its infancy, it supports theoretical claims that human morality is a core aspect of human nature.
Article
Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.
Article
This article explores the evolution of language, focusing on insights derived from observations and experiments in animals, guided by current theoretical problems that were inspired by the generative theory of grammar, and carried forward in substantial ways to the present by psycholinguists working on child language acquisition. We suggest that over the past few years, there has been a shift with respect to empirical studies of animals targeting questions of language evolution. In particular, rather than focus exclusively on the ways in which animals communicate, either naturally or by means of artificially acquired symbol systems, more recent work has focused on the underlying computational mechanisms subserving the language faculty and the ability of nonhuman animals to acquire these in some form. This shift in emphasis has brought biologists studying animals in closer contact with linguists studying the formal aspects of language, and has opened the door to a new line of empirical inquiry that we label evolingo. Here we review some of the exciting new findings in the evolingo area, focusing in particular on aspects of semantics and syntax. With respect to semantics, we suggest that some of the apparently distinctive and uniquely linguistic conceptual distinctions may have their origins in nonlinguistic conceptual representations; as one example, we present data on nonhuman primates and their capacity to represent a singular–plural distinction in the absence of language. With respect to syntax, we focus on both statistical and rule-based problems, especially the most recent attempts to explore different layers within the Chomsky hierarchy; here, we discuss work on tamarins and starlings, highlighting differences in the patterns of results as well as differences in methodology that speak to potential issues of learnability. We conclude by highlighting some of the exciting questions that lie ahead, as well as some of the methodological challenges that face both comparative and developmental studies of language evolution.
Book
The coming of language occurs at about the same age in every healthy child throughout the world, strongly supporting the concept that genetically determined processes of maturation, rather than environmental influences, underlie capacity for speech and verbal understanding. Dr. Lenneberg points out the implications of this concept for the therapeutic and educational approach to children with hearing or speech deficits.
Article
The long-term consequences of early prefrontal cortex lesions occurring before 16 months were investigated in two adults. As is the case when such damage occurs in adulthood, the two early-onset patients had severely impaired social behavior despite normal basic cognitive abilities, and showed insensitivity to future consequences of decisions, defective autonomic responses to punishment contingencies and failure to respond to behavioral interventions. Unlike adult-onset patients, however, the two patients had defective social and moral reasoning, suggesting that the acquisition of complex social conventions and moral rules had been impaired. Thus early-onset prefrontal damage resulted in a syndrome resembling psychopathy.
Article
This chapter provides more comments on the discussion in Chapter 2. It begins by pointing out some fairly obvious and mostly well-known similarities between music and language, and specifically how aspects of Fabb and Halle's proposals reflect these. Observing, then, that the similarities between language and music appear to run quite deep, it speculates on what the reason for this might be. This leads to a brief introduction to the detailed conception of the faculty of language put forward by Hauser, Chomsky, and Fitch (2002). In terms of their approach, the chapter suggests that language and music have in common the core computational system: in other words, at root, the relation between these two human cognitive capacities is not one of similarity or shared evolutionary origin, as has often been suggested, but rather identity. Language and music differ in that the single computational system common to both relates to distinct interfaces in each case: most importantly, language has a propositional or logical interface which music does not have. Both the richness of the natural-language lexicon and the duality of patterning characteristic of natural language may be indirect consequences of this; hence music has a relatively impoverished lexicon and does not appear in any obvious way to show duality of patterning. The tentative conclusion is thus: natural language and music share the same computational system.
Article
A sequence of images, sounds, or words can be stored at several levels of detail, from specific items and their timing to abstract structure. We propose a taxonomy of five distinct cerebral mechanisms for sequence coding: transitions and timing knowledge, chunking, ordinal knowledge, algebraic patterns, and nested tree structures. In each case, we review the available experimental paradigms and list the behavioral and neural signatures of the systems involved. Tree structures require a specific recursive neural code, as yet unidentified by electrophysiology, possibly unique to humans, and which may explain the singularity of human language and cognition.
Article
Significance: The 19th-century Prussian philosopher Wilhelm von Humboldt famously noted that natural language makes "infinite use of finite means." By this, he meant that language deploys a finite set of words to express an effectively infinite set of ideas. As the seat of both language and thought, the human brain must be capable of rapidly encoding the multitude of thoughts that a sentence could convey. How does this work? Here, we find evidence supporting a long-standing conjecture of cognitive science: that the human brain encodes the meanings of simple sentences much like a computer, with distinct neural populations representing answers to basic questions of meaning such as "Who did it?" and "To whom was it done?"
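Humboldt's "infinite use of finite means" can be sketched as a toy program (an illustration of ours, not the cited authors' model): a finite lexicon plus a single recursive rewrite rule already generates an unbounded set of distinct sentences, with the number growing rapidly in the permitted embedding depth.

```python
import itertools

# Finite means: a two-word lexicon per category and one recursive rule,
#   S -> NP V | NP V "that" S
nouns = ['Ann', 'Bob']
verbs = ['knows', 'says']

def sentences(depth):
    """Yield every sentence with at most `depth` embedded clauses."""
    for n, v in itertools.product(nouns, verbs):
        yield f'{n} {v}'                    # simple clause
        if depth > 0:
            for s in sentences(depth - 1):  # recursive clausal complement
                yield f'{n} {v} that {s}'

# 4 sentences at depth 0; 4 + 4*4 = 20 at depth 1; 4 + 4*20 = 84 at
# depth 2; unboundedly many as depth grows.
print(len(list(sentences(2))))  # 84
```

The same finite rule set yields, for example, "Ann says that Bob knows" at depth 1, which is the combinatorial fact the brain must somehow encode when binding "who did it" to each clause.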
Article
I ask why humans are smarter than other primates, and I hypothesize that an important part of the answer lies in the Inner Language Hypothesis, a prerequisite to what I call the Strong Story Hypothesis, which holds that storytelling and understanding have a central role in human intelligence. Next, I introduce the Directed Perception Hypothesis, which holds that we derive much of our common sense, including the common sense required in story understanding, by deploying our perceptual apparatus on real and imagined events. Both the Strong Story Hypothesis and the Directed Perception Hypothesis become more valuable in light of our social nature, an idea captured in the Social Animal Hypothesis. Then, after discussing methodology, I describe the representations and methods embodied in Genesis, a story-understanding system that analyzes stories ranging from précis of Shakespeare's plots to descriptions of conflicts in cyberspace. Genesis works with short story summaries, provided in English, together with low-level common-sense rules and higher-level concept patterns, likewise expressed in English. Using only a small collection of common-sense rules and concept patterns, Genesis demonstrates several story-understanding capabilities, such as determining that both Macbeth and the 2007 Russia-Estonia Cyberwar involve revenge, even though neither the word revenge nor any of its synonyms are mentioned.
Article
One of the major discoveries in the history of 20th century linguistics is that the linear sequence of words constituting a sentence is organized in a hierarchical and recursive fashion. Is this hierarchical structure similar to action and motor planning, as recent proposals suggest? Some crucial differences are highlighted on both theoretical and empirical grounds that make this parallel unsuitable, with far-reaching consequences for evolutionary perspectives.
Article
In the current resurgence of interest in the biological basis of animal behavior and social organization, the ideas and questions pursued by Charles Darwin remain fresh and insightful. This is especially true of The Descent of Man and Selection in Relation to Sex, Darwin's second most important work. This edition is a facsimile reprint of the first printing of the first edition (1871), not previously available in paperback. The work is divided into two parts. Part One marshals behavioral and morphological evidence to argue that humans evolved from other animals. Darwin shows that human mental and emotional capacities, far from making human beings unique, are evidence of an animal origin and evolutionary development. Part Two is an extended discussion of the differences between the sexes of many species and how they arose as a result of selection. Here Darwin lays the foundation for much contemporary research by arguing that many characteristics of animals have evolved not in response to the selective pressures exerted by their physical and biological environment, but rather to confer an advantage in sexual competition. These two themes are drawn together in two final chapters on the role of sexual selection in humans. In their Introduction, Professors Bonner and May discuss the place of The Descent in its own time and its relation to current work in biology and other disciplines.