Jeffrey Watumull’s research while affiliated with Massachusetts Institute of Technology and other places


Publications (11)



Language Is a “Quite Useless” Tool: A Rejoinder to Fedorenko, Piantadosi, and Gibson’s “Language Is Primarily a Tool for Communication Rather Than Thought”
  • Article
  • Full-text available

October 2024 · 91 Reads · 2 Citations · Biolinguistics

Jeffrey Watumull

Contrary to the prevailing assumption that language is “primarily a tool for communication rather than thought”, I argue that language is, to invoke Oscar Wilde, “quite useless”. Arguing from aesthetic philosophy and the minimalist program for linguistic theory, I conjecture that language, like art, is not “for” anything—it simply is, conforming to aesthetic rather than utilitarian principles. Of course, like art, language can be a powerful instrument of communication, but its function is not that of expressing thought; it creates thoughts, “primarily” for communicating with oneself, engaging in Popperian critical rationalism, making thoughts (e.g., sentences, constructive proofs) to match Platonic objects (e.g., propositions, classical proofs).


The Universal Generative Faculty: The source of our expressive power in language, mathematics, morality, and music

November 2016 · 2,014 Reads · 51 Citations · Journal of Neurolinguistics

(in press) Journal of Neurolinguistics, Special Issue “Language evolution: on the origin of the lexical and syntactic structures”. Many have argued that the expressive power of human thought comes from language. Language plays this role, so the argument goes, because its generative computations construct hierarchically structured, abstract representations, covering virtually any content and communicated in linguistic expressions. However, language is not the only domain to implement generative computations and abstract representations, and linguistic communication is not the only medium of expression. Mathematics, morality, and music are three others. These similarities are not, we argue, accidental. Rather, we suggest they derive from a common computational system that we call the Universal Generative Faculty or UGF. UGF is, at its core, a suite of contentless generative procedures that interface with different domains of knowledge to create contentful expressions in thought and action. The representational signatures of different domains are organized and synthesized by UGF into a global system of thought. What was once considered the language of thought is, on our view, the more specific operation of UGF and its interfaces to different conceptual domains. This view of the mind changes the conversation about domain-specificity, evolution, and development. On domain-specificity, we suggest that if UGF provides the generative engine for different domains of human knowledge, then the specificity of a given domain (e.g., language, mathematics, music, morality) is restricted to its repository of primitive representations and to its interfaces with UGF. Evolutionarily, some generative computations are shared with other animals (e.g., combinatorics), both for recognition-learning and generation-production, whereas others are uniquely human (e.g., recursion); in some cases, the cross-species parallels may be restricted to recognition-learning, with no observable evidence of generation-production. Further, many of the differences observed between humans and other animals, as well as among nonhuman animals, are the result of differences in the interfaces: whereas humans promiscuously traverse (consciously and unconsciously) interface conditions so as to combine and analogize concepts across many domains, nonhuman animals are far more limited, often restricted to a specific domain as well as a specific sensory modality within the domain. Developmentally, the UGF perspective may help explain why the generative powers of different domains appear at different stages of development. In particular, because UGF must interface with domain-specific representations, which develop on different time scales, the generative power of some domains may mature more slowly (e.g., mathematics) than others (e.g., language). This explanation may also contribute to a deeper understanding of cross-cultural differences among human populations, especially cases where the generative power of a domain appears absent (e.g., cultures with only a few count words). This essay provides an introduction to these ideas, including a discussion of implications and applications for evolutionary biology, human cognitive development, cross-cultural variation, and artificial intelligence.

Keywords: domain-specificity | evolution | generative functions | language faculty | recursion | Turing machine | Universal Generative Faculty

The ideas developed in this essay grow out of several different intellectual traditions within the formal and cognitive sciences. Broadly speaking, we are interested in what enables human minds to generate a limitless range of ideas and expressions across many different domains of knowledge. To what extent is this facility enabled by domain-general or domain-specific mechanisms? To what extent are these facilities shared with other organisms and to what extent are they uniquely human? To what extent are the generative mechanisms that operate in different domains of knowledge the same or different, and why? What accounts for the developmental timing and maturation of different domains of knowledge? And could the creative, generative power of human intelligence be realized in computing machinery? This essay provides an introductory sketch of an idea that, we believe, helps shed new light on these fundamental questions.

Different traditions of thought

One tradition that not only launched many of the questions noted above, but developed a significant position on the answers, is Chomsky's (1955; 1995) work in linguistics, and the nature of mind more generally. The argument, in brief, is that humans are endowed with a finite cognitive computational system that generates an infinity of meaningful expressions. This is a linguistic system or faculty, with unique — specific to our species and the domain of language — recursive procedures that interface with both the conceptual-intentional (semantics/pragmatics) and sensory-motor (phonology/phonetics) systems to generate hierarchically structured representations. This intensional system — I-language — is internal to an individual, and is often described as forming a language of thought. The sets of expressions this system enumerates have been described (not by Chomsky) as E-languages (e.g., English, French, Japanese, etc.). Based on Chomsky's linguistic framework, some have argued that language enables the expressive power of all other domains, and in many cases, provides the cognitive glue across domains. Thus, for example, Spelke (2016) has argued that what enables us to integrate different domains or modules of thought, including aspects of space and number, is language. In a classic set of experiments (Hermer and Spelke, 1994; 1996) on spatial reorientation following disorientation, young children appear incapable of integrating information about landmarks with information about the geometry of the space, a result that parallels those originally carried out on rats (Cheng, 1986). Such integration only occurs when children acquire spatially-relevant words (e.g., right of, in front of), the linguistic glue that integrates information from the landmark and geometry systems. Moreover, the flawless performance of adults was reduced to that of young children and rats when they were required to carry out a verbal shadowing task, one that effectively blocks access to the language faculty. This perspective sets up language as both the generative machinery of thought and the system that enables interfaces across domains. Similar ideas have inspired models of artificial intelligence in which the human-like AI understands the world by using linguistic machinery to combine commonsense knowledge with perceptual (particularly visual) representations in the form of explanatory “stories” (Winston, 2012).
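The central claim, that a single suite of contentless generative procedures interfaces with domain-specific primitives, can be pictured with a toy sketch. The snippet below is only an illustration under assumptions of mine: the merge function and the example primitives are hypothetical and are not the authors' formalism. It shows one binary combinatorial operation building hierarchically structured objects over words, numbers, or pitches alike.

```python
# A toy sketch, under assumptions of mine (the merge function and the example
# primitives are hypothetical, not the authors' formalism): one contentless
# binary operation builds hierarchically structured objects over primitives
# drawn from different domains, in the spirit of the UGF proposal.

def merge(x, y):
    """Combine two objects into an unordered, set-like pair {x, y}."""
    return frozenset([x, y])

# The same procedure applied to different domain-specific primitives.
phrase = merge("the", merge("ball", "rolled"))   # language (toy lexical items)
number = merge(1, merge(2, 3))                   # mathematics (toy numerals)
chord  = merge("C4", merge("E4", "G4"))          # music (toy pitches)

print(phrase)
print(number)
print(chord)
```

On this picture, what distinguishes the domains is not the combinatorial procedure but the repository of primitives it is fed and the interfaces it connects to, which is the point the essay develops.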




The mystery of language evolution

May 2014 · 4,881 Reads · 303 Citations

Understanding the evolution of language requires evidence regarding origins and processes that led to change. In the last 40 years, there has been an explosion of research on this problem as well as a sense that considerable progress has been made. We argue instead that the richness of ideas is accompanied by a poverty of evidence, with essentially no explanation of how and why our linguistic computations and representations evolved. We show that, to date, (1) studies of nonhuman animals provide virtually no relevant parallels to human linguistic communication, and none to the underlying biological capacity; (2) the fossil and archaeological evidence does not inform our understanding of the computations and representations of our earliest ancestors, leaving details of origins and selective pressure unresolved; (3) our understanding of the genetics of language is so impoverished that there is little hope of connecting genes to linguistic processes any time soon; (4) all modeling attempts have made unfounded assumptions, and have provided no empirical tests, thus leaving any insights into language's origins unverifiable. Based on the current state of evidence, we submit that the most fundamental questions about the origins and evolution of our linguistic capacity remain as mysterious as ever, with considerable uncertainty about the discovery of either relevant or conclusive evidence that can adjudicate among the many open hypotheses. We conclude by presenting some suggestions about possible paths forward.


Conceptual and Methodological Problems with Comparative Work on Artificial Language Learning

April 2014 · 248 Reads · 5 Citations · Biolinguistics

Several theoretical proposals for the evolution of language have sparked a renewed search for comparative data on human and non-human animal computational capacities. However, conceptual confusions still hinder the field, leading to experimental evidence that fails to test for comparable human competences. Here we focus on two conceptual and methodological challenges that affect the field generally: 1) properly characterizing the computational features of the faculty of language in the narrow sense; 2) defining and probing for human language-like computations via artificial language learning experiments in non-human animals. Our intent is to be critical in the service of clarity, in what we agree is an important approach to understanding how language evolved.


Conceptual and empirical problems with game theoretic approaches to language evolution

March 2014 · 150 Reads · 6 Citations

The importance of game theoretic models to evolutionary theory has been in formulating elegant equations that specify the strategies to be played and the conditions to be satisfied for particular traits to evolve. These models, in conjunction with experimental tests of their predictions, have successfully described and explained the costs and benefits of varying strategies and the dynamics for establishing equilibria in a number of evolutionary scenarios, including especially cooperation, mating, and aggression. Over the past decade or so, game theory has been applied to model the evolution of language. In contrast to the aforementioned scenarios, however, we argue that these models are problematic due to conceptual confusions and empirical deficiencies. In particular, these models conflate the computations and representations of our language faculty (mechanism) with its utility in communication (function); model languages as having different fitness functions for which there is no evidence; depend on assumptions for the starting state of the system, thereby begging the question of how these systems evolved; and to date, have generated no empirical studies at all. Game theoretic models of language evolution have therefore failed to advance how or why language evolved, or why it has the particular representations and computations that it does. We conclude with some brief suggestions for how this situation might be ameliorated, enabling this important theoretical tool to make substantive empirical contributions.


On recursion

January 2014 · 3,130 Reads · 48 Citations

It is a truism that conceptual understanding of a hypothesis is required for its empirical investigation. However, the concept of recursion as articulated in the context of linguistic analysis has been perennially confused. Nowhere has this been more evident than in attempts to critique and extend Hauser et al.'s (2002) articulation. These authors put forward the hypothesis that what is uniquely human and unique to the faculty of language—the faculty of language in the narrow sense (FLN)—is a recursive system that generates and maps syntactic objects to conceptual-intentional and sensory-motor systems. This thesis was based on the standard mathematical definition of recursion as understood by Gödel and Turing, and yet has commonly been interpreted in other ways, most notably and incorrectly as a thesis about the capacity for syntactic embedding. As we explain, the recursiveness of a function is defined independently of its output, whether infinite or finite, embedded or unembedded—existent or non-existent. And to the extent that embedding is a sufficient, though not necessary, diagnostic of recursion, it has not been established that the apparent restriction on embedding in some languages is of any theoretical import. Misunderstanding of these facts has generated research that is often irrelevant to the FLN thesis as well as to other theories of language competence that focus on its generative power of expression. This essay is an attempt to bring conceptual clarity to such discussions as well as to future empirical investigations by explaining three criterial properties of recursion: computability (i.e., rules in intension rather than lists in extension); definition by induction (i.e., rules strongly generative of structure); and mathematical induction (i.e., rules for the principled—and potentially unbounded—expansion of strongly generated structure). By these necessary and sufficient criteria, the grammars of all natural languages are recursive.
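The three criterial properties named in the abstract can be made concrete with a small sketch. This is an illustration under assumptions of mine (the merge and generate functions and the depth parameter are hypothetical, not the paper's formalism): a finite rule stated in intension, defined by induction over its own prior outputs, and applicable in principle without bound.

```python
# A minimal sketch, assuming a toy structure-building rule of my own
# (not the paper's formalism), to illustrate the three criterial properties
# of recursion named in the abstract.

def merge(x, y):
    # Computability: the rule is given in intension (a finite procedure),
    # not as a list of outputs in extension.
    return (x, y)

def generate(depth, atom="a"):
    # Definition by induction: each structure is built from a structure
    # generated at the previous step; the base case is a bare atom.
    if depth == 0:
        return atom
    return merge(atom, generate(depth - 1, atom))

# Mathematical induction: the same finite rule licenses principled and
# potentially unbounded expansion of the strongly generated structure.
for d in range(4):
    print(generate(d))
# a
# ('a', 'a')
# ('a', ('a', 'a'))
# ('a', ('a', ('a', 'a')))

# The rule is recursive by definition, independently of whether any
# particular output happens to exhibit embedding.
```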


Biolinguistics and Platonism: Contradictory or Consilient?

December 2013 · 114 Reads · 7 Citations · Biolinguistics

It has been argued that language is a Platonic object, and therefore that a biolinguistic ontology is incoherent. In particular, the notion of language as a system of discrete infinity has been argued to be inconsistent with the assumption of a physical (finite) basis for language. These arguments are flawed. Here I demonstrate that biolinguistics and mathematical Platonism are not mutually exclusive and contradictory, but in fact mutually reinforcing and consilient in a coherent and compelling philosophy of language. This consilience is effected by Turing’s proof of the coherency of a finite procedure generative of infinite sets.
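The point that discrete infinity is compatible with a finite basis can be pictured with a purely illustrative sketch of my own (the generator below is hypothetical, not from the paper): a finitely specified procedure whose extension is a discretely infinite set.

```python
# An illustrative sketch under assumptions of mine (not from the paper):
# a finitely specified procedure whose extension is a discretely infinite
# set, in the spirit of the Turing-style point invoked in the abstract.
from itertools import islice

def successors(start=0):
    """Enumerate 0, 1, 2, ... from a finite description."""
    n = start
    while True:
        yield n
        n += 1

# Only finitely many elements are realized in any run, but the procedure
# itself places no bound on the set it generates.
print(list(islice(successors(), 10)))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```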



Citations (10)


... Compared to the previous volume, book reviews have made a comeback with reviews of Merge and the Strong Minimalist Thesis (Chomsky et al., 2023; reviewed in van Gelderen, 2024) and The Philosophy of Theoretical Linguistics: A Contemporary Outlook (Nefdt, 2024; reviewed in Voudouris & Roe, 2024). Lastly, our Forum section has again seen lively discussion with contributions questioning how to evaluate the language abilities of LLMs (Leivada, Dentella, & Günther, 2024), arguing that so-called "laryngeal descent theory" for the origin of speech was actually never a popular line of thinking in speech-centric sciences (Ekström, 2024), and providing a minimalist perspective on ongoing discussions about the purpose of language claiming that language evolved primarily as a tool for communication (Watumull, 2024). ...

Reference:

Biolinguistics End-of-Year Notice 2024
Language Is a “Quite Useless” Tool: A Rejoinder to Fedorenko, Piantadosi, and Gibson’s “Language Is Primarily a Tool for Communication Rather Than Thought”

Biolinguistics

... In all of these studies, the observed changes are bilateral, extended, and go beyond the language network per se. Such an extended network does not fit with the hypothesis that a single localised system, such as natural language or a universal generative faculty, is the primary engine of all human-specific abstract symbolic abilities (Hauser and Watumull, 2017; Spelke, 2003). Rather, our results suggest that multiple parallel and partially dissociable human brain networks possess symbolic abilities and deploy them in different domains such as natural language, music and mathematics (Amalric and Dehaene, 2017; Chen et al., 2021; Dehaene et al., 2022; Fedorenko et al., 2011; Fedorenko and Varley, 2016). ...

The Universal Generative Faculty: The source of our expressive power in language, mathematics, morality, and music
  • Citing Article
  • November 2016

Journal of Neurolinguistics

... The error which we are mainly concerned with is the subject relative pronoun deletion from a relative clause which modifies an indefinite noun of a post-verbal noun phrase (NP), even though the same structure is termed Determiner Phrase (DP), or AgrP in more recent works, in a sentence like "Jack is a student Ø doesn't come late" committed in the written work by Arabic speakers of English as a foreign language. This error, in addition to other types of errors in English relative clauses, has been observed in many earlier works, some for pedagogical purposes, such as those of Yorkey (1977), Scott and Tucker (1974), Schachter (1974), and Hamdallah and Tushyeh (1995), while others, such as Lambrecht (1988), Chomsky (1995), Duffield (2009), Fox (2003), and Collins (2015), deal with various theoretical aspects of the relative clause, since it is a global structure subject to various processes and/or constraints. Our main concern in this work will concentrate on both the contrastive analyses and the theoretical aspects of the relative clause as they pertain to the problem which our students face, namely the absence of the relative clause subject from English sentences when the relative clause describes an indefinite noun in the post-verbal position of the matrix sentence. ...

50 years later: Reflections on Chomsky's Aspects

... Methodological studies of software programming provide further accurate descriptions of programs. Block diagrams, pseudocode, hierarchical diagrams, module diagrams, Voronoi maps, sequence diagrams, class diagrams and so on are used to illustrate special aspects of the software and to aid experts in implementing software products [22], [23], [24]. Both theoretical and professional models share a common characteristic: they describe a software program that functions correctly, and do not account for the software program that fails or is terminated abnormally. ...

A Turing Program for Linguistic Theory

Biolinguistics

... Language, a uniquely human trait and cooperative layered system, has been a topic of interest across centuries and research disciplines because its emergence remains a scientific puzzle (Christiansen and Kirby 2003; Fitch 2010; Hauser et al. 2014; Knight et al. 2000). Recently, it has been proposed that the special capacity for social interaction among humans facilitated the evolution of language (Levinson 2006, 2016; see also Vygotsky 1962). ...

The mystery of language evolution

... As summarized by Dehaene and colleagues (2015), Fitch (2014), and ten Cate (2016), there is evidence that species within each of these animal groups show abilities to extract statistical-probabilistic patterns and algebraic rules, with extremely limited evidence within the space of generative algorithms, mostly restricted to the lowest level of the Chomsky Hierarchy, that is, regular languages (Rogers & Hauser, 2010). Of the studies showing successful recognition-learning of patterned sequences, however, the majority entail artificially created patterns within one modality, based on extensive training procedures, and in some cases it is arguable whether the animals did in fact perform the claimed computations (discussed in Watumull et al., 2014a); a far smaller set of studies have created patterned material from the species-specific repertoire (Comins and Gentner, 2013), explored visual stimuli, or used non-training spontaneous methods (Abe and Watanabe, 2011). No study has looked at pattern recognition across multiple domains, and to our knowledge, only one study has explored and showed successful transfer across modalities (visual to auditory; Murphy et al., 2008). ...

Conceptual and Methodological Problems with Comparative Work on Artificial Language Learning

Biolinguistics

... The most significant advantage of game-theoretic methods is the possibility of reusing the rich body of results established by game theorists. The main shortcoming of these methods is their problematic use due to conceptual confusion and empirical deficiencies, as emerges from Watumull and Hauser [28]. ...

Conceptual and empirical problems with game theoretic approaches to language evolution

... Karlsson, 2010). Watumull et al. (2014) criticize the concept of recursion as articulated in linguistic analysis; they point out that "syntactic embedding is a sufficient, though not necessary, diagnostic of recursion" (p. 1). In the interpretation of our data we will extend the concept of recursion beyond linguistic syntax to the recursive logic of theory-of-mind (ToM) reasoning. ...

On recursion

... The rule-based framework purports that the learning of words (e.g., AXC) and nonadjacent structure (e.g., A-C pairings) requires separate mechanisms. According to this framework, although basic statistical computations are sufficient for acquiring individual AXC items from speech, learning A-C structural relations is thought to require complex, "algebraic" computations involving rule-like representations to enable generalization (Endress & Bonatti, 2007; Endress, Cahill, et al., 2009; Endress, Nespor, & Mehler, 2009; Peña et al., 2002). This account further suggests that positional memory mechanisms may be sufficient to explain sensitivity to nonadjacent dependencies (Endress & Bonatti, 2016), with syllables at the edges of these structures being encoded rather than the nonadjacent structure as a whole (Endress & Bonatti, 2007). ...

Evidence of an evolutionary precursor to human language affixation in a non-human primate