Article

The impact of generative linguistics on psychology: Language acquisition, a paradigm example

... For further discussion of the poverty of the stimulus and language acquisition, I refer the reader to Marino & Gervain (2019) in this issue. ...
Article
Full-text available
Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high–low and loud–soft) and alternating in duration as iambs (short–long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to what extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity conditions did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.
Article
Full-text available
Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.
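The layer-wise learning this abstract describes can be made concrete with a minimal sketch: a two-layer network whose internal parameters are adjusted by backpropagation so that each layer's representation is computed from the previous layer's output. The task, layer sizes, and learning rate below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal two-layer network trained by plain backpropagation (a sketch,
# not the paper's models). Each layer computes its representation from the
# previous layer's output; backprop indicates how to change the weights.

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (64, 1))       # toy inputs (assumed task)
y = X ** 2                                # target function to approximate

W1, b1 = rng.normal(0.0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)
lr = 0.1

def forward(x):
    h = np.tanh(x @ W1 + b1)              # hidden-layer representation
    return h, h @ W2 + b2                 # prediction from that representation

_, pred = forward(X)
loss0 = float(np.mean((pred - y) ** 2))   # error before training

for _ in range(2000):
    h, pred = forward(X)
    g = 2.0 * (pred - y) / len(X)         # dLoss/dPrediction
    gW2, gb2 = h.T @ g, g.sum(axis=0)
    gh = (g @ W2.T) * (1.0 - h ** 2)      # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1        # gradient-descent updates
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss1 = float(np.mean((pred - y) ** 2))   # error after training
print(f"mse before {loss0:.3f} -> after {loss1:.3f}")
```

The same backward pass generalizes to deeper stacks: the gradient computed at each layer is propagated to the layer below it.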
Article
Full-text available
To achieve language proficiency, infants must find the building blocks of speech and master the rules governing their legal combinations. However, these problems are linked: words are also built according to rules. Here, we explored early morphosyntactic sensitivity by testing when and how infants could find either words or within-word structure in artificial speech snippets embodying properties of morphological constructions. We show that 12-month-olds use statistical relationships between syllables to extract words from continuous streams, but find word-internal regularities only if the streams are segmented. Seven-month-olds fail both tasks. Thus, 12-month-old infants possess the resources to analyze the internal composition of words if the speech contains segmentation information. However, 7-month-old infants may not possess them, although they can track several statistical relations. This developmental difference suggests that morphosyntactic sensitivity may require computational resources extending beyond the detection of simple statistics.
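The statistical relationship at issue, the transitional probability between adjacent syllables, can be sketched as follows. The three nonsense words and their syllabification are hypothetical stand-ins, not the study's actual stimuli.

```python
import random
from collections import Counter

# Sketch of statistical word segmentation from a continuous syllable stream,
# the kind of computation attributed to 12-month-olds. The nonsense words
# below are hypothetical examples, not the study's stimuli.

words = ["tupiro", "golabu", "bidaku"]

def syllables(word):
    # split a six-letter word into three consonant-vowel syllables
    return [word[i:i + 2] for i in range(0, len(word), 2)]

random.seed(1)
stream = []
for _ in range(300):                       # unsegmented familiarization stream
    stream.extend(syllables(random.choice(words)))

# transitional probability TP(a -> b) = count(a followed by b) / count(a)
pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

within = tp[("tu", "pi")]                  # word-internal transition
across = tp.get(("ro", "go"), 0.0)         # transition spanning a word boundary
print(f"within-word TP {within:.2f}, across-boundary TP {across:.2f}")
# Dips in TP (about 1/3 here, versus 1.0 word-internally) mark likely
# word boundaries; finding word-internal structure requires more than this.
```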
Article
Full-text available
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. © 2014 The Authors. Dev Psychobiol Published by Wiley Periodicals, Inc.
Article
Full-text available
Abstracting syntactic rules is critical to human language learning. It is debated whether this ability, already present in young infants, is human- and language-specific or can also be found in non-human animals, indicating it may arise from more general cognitive mechanisms. Current studies are often ambiguous and few have directly compared rule learning by humans and non-human animals. In a series of discrimination experiments, we presented zebra finches and human adults with comparable training and tests with the same artificial stimuli consisting of XYX and XXY structures, in which X and Y were zebra finch song elements. Zebra finches readily discriminated the training stimuli. Some birds also discriminated novel stimuli when these were composed of familiar element types, but none of the birds generalized the discrimination to novel element types. We conclude that zebra finches show evidence of simple rule abstraction related to positional learning, suggesting stimulus-bound generalization, but found no evidence for a more abstract rule generalization. This differed from the human adults, who categorized novel stimuli consisting of novel element types into different groups according to their structure. The limited abilities for rule abstraction in zebra finches may indicate what the precursors of the more complex abstraction found in humans were like.
Article
Full-text available
Artificial grammars (AG) are designed to emulate aspects of the structure of language, and AG learning (AGL) paradigms can be used to study the extent of nonhuman animals' structure-learning capabilities. However, different AG structures have been used with nonhuman animals and are difficult to compare across studies and species. We developed a simple quantitative parameter space, which we used to summarize previous nonhuman animal AGL results. This was used to highlight an under-studied AG with a forward-branching structure, designed to model certain aspects of the nondeterministic nature of word transitions in natural language and animal song. We tested whether two monkey species could learn aspects of this auditory AG. After habituating the monkeys to the AG, analysis of video recordings showed that common marmosets (New World monkeys) differentiated between well formed, correct testing sequences and those violating the AG structure based primarily on simple learning strategies. By comparison, Rhesus macaques (Old World monkeys) showed evidence for deeper levels of AGL. A novel eye-tracking approach confirmed this result in the macaques and demonstrated evidence for more complex AGL. This study provides evidence for a previously unknown level of AGL complexity in Old World monkeys that seems less evident in New World monkeys, which are more distant evolutionary relatives to humans. The findings allow for the development of both marmosets and macaques as neurobiological model systems to study different aspects of AGL at the neuronal level.
Article
Full-text available
Sensitivity to dependencies (correspondences between distant items) in sensory stimuli plays a crucial role in human music and language. Here, we show that squirrel monkeys (Saimiri sciureus) can detect abstract, non-adjacent dependencies in auditory stimuli. Monkeys discriminated between tone sequences containing a dependency and those lacking it, and generalized to previously unheard pitch classes and novel dependency distances. This constitutes the first pattern learning study where artificial stimuli were designed with the species' communication system in mind. These results suggest that the ability to recognize dependencies represents a capability that had already evolved in humans' last common ancestor with squirrel monkeys, and perhaps before.
Article
Full-text available
At the present time, cochlear implantation is the only available medical intervention for patients with profound hearing loss and is considered the "standard of care" for both prelingually deaf infants and post-lingually deaf adults. It has been suggested recently that cochlear implants are one of the greatest accomplishments of auditory neuroscience. Despite the enormous success of cochlear implantation for the treatment of profound deafness, especially in young prelingually deaf children, several pressing unresolved clinical issues have emerged that are at the forefront of current research efforts in the field. In this commentary we briefly review how a cochlear implant works and then discuss five of the most critical clinical and basic research issues: (1) individual differences in outcome and benefit, (2) speech perception in noise, (3) music perception, (4) neuroplasticity and perceptual learning, and (5) binaural hearing.
Article
Full-text available
"Phonological bootstrapping" is the hypothesis that a purely phonological analysis of the speech signal may allow infants to start acquiring the lexicon and syntax of their native language (Morgan & Demuth, 1996a). To assess this hypothesis, a first step is to estimate how much information is provided by a phonological analysis of the speech input conducted in the absence of any prior (language-specific) knowledge in other domains such as syntax or semantics. We first review existing work on how babies may start acquiring a lexicon by relying on distributional regularities, phonotactics, typical word shape and prosodic boundary cues. Taken together, these sources of information may enable babies to learn the sound pattern of a reasonable number of the words in their native language. We then focus on syntax acquisition and discuss how babies may set one of the major structural syntactic parameters, the head direction parameter, by listening to prominence within phonological phrases and before they possess any words. Next, we discuss how babies may hope to acquire function words early, and how this knowledge would help lexical segmentation and acquisition, as well as syntactic analysis and acquisition. We then present a model of phonological bootstrapping of the lexicon and syntax that helps us to illustrate the congruence between problems. Some sources of information appear to be useful for more than one purpose; for example, phonological phrases and function words may help lexical segmentation as well as segmentation into syntactic phrases and labelling (NP, VP, etc.). Although our model derives directly from our reflection on acquisition, we argue that it may also be adequate as a model of adult speech processing. Since adults allow a greater variety of experimental paradigms, an advantage of our approach is that specific hypotheses can be tested on both populations.
We illustrate this aspect in the final section of the paper, where we present the results of an adult experiment which indicates that prosodic boundaries and function words play an important role in continuous speech processing.
Article
Artificial grammar learning (AGL) paradigms have proven to be productive and useful to investigate how young infants break into the grammar of their native language(s). The question of when infants first show the ability to learn abstract grammatical rules has been central to theoretical debates about the innate vs. learned nature of grammar. The presence of this ability early in development, that is, before considerable experience with language, has been argued to provide evidence for a biologically endowed ability to acquire language. Artificial grammar learning tasks also allow infant populations to be readily compared with adults and non‐human animals. Artificial grammar learning paradigms with infants have been used to investigate a number of linguistic phenomena and learning tasks, from word segmentation to phonotactics and morphosyntax. In this review, we focus on AGL studies testing infants’ ability to learn grammatical/structural properties of language. Specifically, we discuss the results of AGL studies focusing on repetition‐based regularities, the categorization of functors, adjacent and non‐adjacent dependencies, and word order. We discuss the implications of the results for a general theory of language acquisition, and we outline some of the open questions and challenges.
Article
Extracting the regularities of our environment is a core cognitive ability in human and non‐human primates. Comparative studies may provide information of strong heuristic value to constrain the elaboration of computational models of regularity learning. This study illustrates this point by testing human and non‐human primates (Guinea baboons, Papio papio) with the same experimental paradigm, using a novel online learning measure. For local co‐occurrence regularities, we found similar patterns of regularity extraction in baboons and humans. However, only humans extracted the more global sequence structure. It is proposed that only the first result that is common to both species should be used to constrain models of regularity learning. The second result indicates that the extraction of global regularities cannot be accounted for by mere associative learning mechanisms and suggests that humans probably benefit from their language recoding abilities for extracting these regularities. We propose to use a comparative approach to address a series of remaining theoretical questions, which will contribute to the development of a general theory of regularity learning.
Article
Human and non‐human primates share the ability to extract adjacent dependencies and, under certain conditions, non‐adjacent dependencies (i.e., predictive relationships between elements that are separated by one or several intervening elements in a sequence). In this study, we explore the online extraction dynamics of non‐adjacent dependencies in humans and baboons using a serial reaction time task. Participants had to produce three‐target sequences containing deterministic relationships between the first and last target locations. In Experiment 1, participants from the two species could extract these non‐adjacent dependencies, but humans required less exposure than baboons. In Experiment 2, the data show for the first time in a non‐human primate species the successful generalization of sequential non‐adjacent dependencies over novel intervening items. These findings provide new evidence to further constrain current theories about the nature and the evolutionary origins of the learning mechanisms allowing the extraction of non‐adjacent dependencies.
Article
Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of ImageNet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.
Article
Experience with spoken language starts prenatally, as hearing becomes operational during the second half of gestation. While maternal tissues filter out many aspects of speech, they readily transmit speech prosody and rhythm. These properties of the speech signal then play a central role in early language acquisition. In this study, we ask how the newborn brain uses variation in duration, pitch and intensity (the three acoustic cues that carry prosodic information in speech) to group sounds. In four near-infrared spectroscopy (NIRS) studies, we demonstrate that perceptual biases governing how sound sequences are perceived and organized are present in newborns from monolingual and bilingual language backgrounds. Importantly, however, these prosodic biases are present only for acoustic patterns found in the prosody of their native languages. These findings advance our understanding of how prenatal language experience lays the foundations for language development.
Book
The coming of language occurs at about the same age in every healthy child throughout the world, strongly supporting the concept that genetically determined processes of maturation, rather than environmental influences, underlie capacity for speech and verbal understanding. Dr. Lenneberg points out the implications of this concept for the therapeutic and educational approach to children with hearing or speech deficits.
Article
Several theories have stressed the importance of intersensory integration for development but have not identified specific underlying integration mechanisms. The author reviews and synthesizes current knowledge about the development of intersensory temporal perception and offers a theoretical model based on epigenetic systems theory, proposing that responsiveness to 4 basic features of multimodal temporal experience-temporal synchrony, duration, temporal rate, and rhythm-emerges in a sequential, hierarchical fashion. The model postulates that initial developmental limitations make intersensory synchrony the basis for the integration of intersensory temporal relations and that the emergence of responsiveness to the other, increasingly more complex, temporal relations occurs in a hierarchical, sequential fashion by building on the previously acquired intersensory temporal processing skills.
Article
Previous work in which we compared English infants, English adults, and Hindi adults on their ability to discriminate two pairs of Hindi (non-English) speech contrasts has indicated that infants discriminate speech sounds according to phonetic category without prior specific language experience (Werker, Gilbert, Humphrey, & Tees, 1981), whereas adults and children as young as age 4 (Werker & Tees, in press), may lose this ability as a function of age and/or linguistic experience. The present work was designed to (a) determine the generalizability of such a decline by comparing adult English, adult Salish, and English infant subjects on their perception of a new non-English (Salish) speech contrast, and (b) delineate the time course of the developmental decline in this ability. The results of these experiments replicate our original findings by showing that infants can discriminate non-native speech contrasts without relevant experience, and that there is a decline in this ability during ontogeny. Furthermore, data from both cross-sectional and longitudinal studies show that this decline occurs within the first year of life, and that it is a function of specific language experience. © 2002 Published by Elsevier Science Inc.
Article
This book addresses fundamental questions about how humans acquired language and how language evolved with new and compelling arguments. The book spans an extensive range of different scientific disciplines, including anthropology, archaeology, biology, cognitive science, computational linguistics, linguistics, neurophysiology, neuropsychology, neuroscience, philosophy, primatology, psycholinguistics, and psychology. It provides up-to-date perspectives on language evolution in as non-technical a way as possible without overly simplifying the issues. Topics range from language as an adaptation to the cognitive niche, to linguistics and what it can tell about the origins of language, universal grammar and semiotic constraints, the different origins of symbols and grammar, archaeological evidence of language origins, human components of the language faculty, neural basis for language readiness, gestural origins of language, the gestural origin of discrete infinity, motor control and speech, language learning, and grammatical assimilation.
Chapter
Developmental theories of face perception and speech perception have similar goals. Theorists in both domains seek to explain infants’ early sophistication with regard to the detection and/or discrimination of facial and speech stimuli and to determine whether infants’ early abilities are due to mechanisms dedicated to the processing of specific biologically relevant stimuli or more general sensory/cognitive mechanisms. In addition, theorists in both domains seek to explain how experience with specific faces and speech sounds modifies infants’ perception. In this chapter, studies showing enhanced discriminability at phonetic boundaries, as well as studies on the perception of phonetic prototypes, exceptionally good instances representing the centers of phonetic categories, are described. The studies show that although phonetic boundary effects are common to monkey and man, prototype effects are not. For human listeners prototypes play a unique role in speech perception. They function like “perceptual magnets,” attracting nearby members of the category. By 6 months of age the prototype’s perceptual magnet effect is language-specific. Exposure to a specific language thus alters infants’ perception prior to the acquisition of word meaning and linguistic contrast. These results support a new theory, the Native Language Magnet (NLM) theory, which describes how innate factors and early experience with language interact in the development of speech perception.
Article
In normal conversation, speakers translate thoughts into words at high speed. To enable this speed, the retrieval of distinct types of linguistic knowledge has to be orchestrated with millisecond precision. The nature of this orchestration is still largely unknown. This report presents dynamic measures of the real-time activation of two basic types of linguistic knowledge, syntax and phonology. Electrophysiological data demonstrate that during noun-phrase production speakers retrieve the syntactic gender of a noun before its abstract phonological properties. This two-step process operates at high speed: the data show that phonological information is already available 40 milliseconds after syntactic properties have been retrieved.
Article
A fundamental task of language acquisition is to extract abstract algebraic rules. Three experiments show that 7-month-old infants attend longer to sentences with unfamiliar structures than to sentences with familiar structures. The design of the artificial language task used in these experiments ensured that this discrimination could not be performed by counting, by a system that is sensitive only to transitional probabilities, or by a popular class of simple neural network models. Instead, these results suggest that infants can represent, extract, and generalize abstract algebraic rules.
Article
This paper discusses the linguistic development of Genie, an adolescent girl who for most of her life underwent a degree of social isolation and experiential deprivation unparalleled in the reports of scientific investigation. This case touches on questions of profound interest to psychologists, philosophers, and linguists, including the relationship between cognition and language, the interdependence or autonomy of linguistic competence and performance, the mental abilities underlying language, proposed universal stages in language learning, the critical age for language acquisition, and the biological foundations of language.
Article
Contents: the nature of the mental lexicon (Edwin Williams and Beth Levin); discovering the word units (Anne Cutler et al.); categorizing the world (Susan Carey and Frank C. Keil); categories, words and language (Ellen M. Markman et al.); the case of verbs (Cynthia Fischer, D. Geoffrey Hall et al.); procedures for verb learning (Michael R. Brent et al.).
Article
A continuing debate in language acquisition research is whether there are critical periods (CPs) in development during which the system is most responsive to environmental input. Recent advances in neurobiology provide a mechanistic explanation of CPs, with the balance between excitatory and inhibitory processes establishing the onset and molecular brakes establishing the offset of windows of plasticity. In this article, we review the literature on human speech perception development within the context of this CP model, highlighting research that reveals the interplay of maturational and experiential influences at key junctures in development and presenting paradigmatic examples testing CP models in human subjects. We conclude with a discussion of how a mechanistic understanding of CP processes changes the nature of the debate: The question no longer is, "Are there CPs?" but rather what processes open them, keep them open, close them, and allow them to be reopened. Expected final online publication date for the Annual Review of Psychology Volume 66 is November 30, 2014.
Article
New tools and new ideas have changed how we think about the neurobiological foundations of speech and language processing. This perspective focuses on two areas of progress. First, focusing on spatial organization in the human brain, the revised functional anatomy for speech and language is discussed. The complexity of the network organization undermines the well-regarded classical model and suggests looking for more granular computational primitives, motivated both by linguistic theory and neural circuitry. Second, focusing on recent work on temporal organization, a potential role of cortical oscillations for speech processing is outlined. Such an implementational-level mechanism suggests one way to deal with the computational challenge of segmenting natural speech.
Article
In this article, we begin with a summary of the evidence for perceptual narrowing for various aspects of language (e.g., vowel and consonant contrasts, tone languages, visual language, sign language) and of faces (e.g., own species, own race). We then consider possible reasons for the apparent differences in the timing of narrowing (e.g., apparently earlier for own race than for own species). Throughout we consider whether the evidence fits a model of maintenance/loss or is better characterized as enhancement/attunement to exposed categories. Finally, we consider evidence on the malleability of the timing and its implications for the role of endogenous factors versus learning in controlling when narrowing occurs. Overall, the comparison across domains revealed many similarities but also striking differences which lead to suggestions for future research. © 2013 Wiley Periodicals, Inc. Dev Psychobiol 56: 154-178, 2014.
Article
The critical period hypothesis holds that first language acquisition must occur before cerebral lateralization is complete, at about the age of puberty. One prediction of this hypothesis is that second language acquisition will be relatively fast, successful, and qualitatively similar to first language only if it occurs before the age of puberty. This prediction was tested by studying longitudinally the naturalistic acquisition of Dutch by English speakers of different ages. The subjects were tested 3 times during their first year in Holland, with an extensive test battery designed to assess several aspects of their second language ability. It was found that the subjects in the age groups 12-15 and adult made the fastest progress during the first few months of learning Dutch and that at the end of the first year the 8-10 and 12-15-year-olds had achieved the best control of Dutch. The 3-5-year-olds scored lowest on all the tests employed. These data do not support the critical period hypothesis for language acquisition.
Article
We propose that infants may learn about the relative order of heads and complements in their language before they know many words, on the basis of prosodic information (relative prominence within phonological phrases). We present experimental evidence that 6-12-week-old infants can discriminate two languages that differ in their head direction and its prosodic correlate, but have otherwise similar phonological properties (i.e. French and Turkish). This result supports the hypothesis that infants may use this kind of prosodic information to bootstrap their acquisition of word order.
Article
This paper reports on the results of a detailed empirical study of word order correlations, based on a sample of 625 languages. The primary result is a determination of exactly what pairs of elements correlate in order with the verb and object. Some pairs of elements that have been claimed to correlate in order with the verb and object do not in fact exhibit any correlation. I argue against the Head-Dependent Theory (HDT), according to which the correlations reflect a tendency towards consistent ordering of heads and dependents. I offer an alternative account, the Branching Direction Theory (BDT), based on consistent ordering of phrasal and nonphrasal elements. According to the BDT, the word order correlations reflect a tendency for languages to be consistently right-branching or consistently left-branching.
Article
A fundamental goal of linguistic theory is to account for language acquisition. At the heart of the problem is the poverty of the stimulus, which underdetermines the hypotheses that children formulate. Generative grammar proposes that the form for expressing rules is innately constrained, and one putative constraint is structure-dependence. The present study subjected this proposal to an empirical test. In the first experiment, yes/no questions (amenable in principle to both structure-dependent and structure-independent analyses) were elicited from thirty 3- to 5-year-old children. A second experiment explored the nature of children's errors in Experiment 1. A third experiment contrasted a structurally-based account of the acquisition of interrogatives with one based on semantic generalization. The results of these experiments support Chomsky's contention that children unerringly hypothesize structure-dependent rules. Moreover, it was found that the rules which children invoke are formally insensitive to the semantic properties of noun phrases, a finding that supports the developmental autonomy of syntax.
Article
This article examines a type of argument for linguistic nativism that takes the following form: (i) a fact about some natural language is exhibited that allegedly could not be learned from experience without access to a certain kind of (positive) data; (ii) it is claimed that data of the type in question are not found in normal linguistic experience; hence (iii) it is concluded that people cannot be learning the language from mere exposure to language use. We analyze the components of this sort of argument carefully, and examine four exemplars, none of which hold up. We conclude that linguists have some additional work to do if they wish to sustain their claims about having provided support for linguistic nativism, and we offer some reasons for thinking that the relevant kind of future work on this issue is likely to further undermine the linguistic nativist position.
Article
Investigated the effects of changing various aspects of simple artificial languages during a learning task. Changes in the syntactic structure of the stimulus items seriously interfered with Ss' (3 male undergraduates) memory performance on a transfer task. Changes in the explicit symbols used to make up the stimulus items produced little or no interference. It was argued that Ss learn artificial languages by learning the abstract structure of the "sentences" in the language rather than by learning to string together explicit symbols.
Article
A fundamental goal of linguistic theory is to explain how natural languages are acquired. This paper describes some recent findings on how learners acquire syntactic knowledge for which there is little, if any, decisive evidence from the environment. The first section presents several general observations about language acquisition that linguistic theory has tried to explain and discusses the thesis that certain linguistic properties are innate because they appear universally and in the absence of corresponding experience. A third diagnostic for innateness, early emergence, is the focus of the second section of the paper, in which linguistic theory is tested against recent experimental evidence on children's acquisition of syntax.