Article

Exploring some edges: Chunk-and-Pass processing at the very beginning, across representations, and on to action

Abstract

We identify three “working edges” for fruitful elaboration of the Chunk-and-Pass proposal: (a) accounting for the earliest phases of language acquisition, (b) explaining diversity in the stability and plasticity of different representational types, and (c) propelling investigation of action processing.

... Similarly, chunking processes also appear to govern the segmentation of events. Language has been proposed as a form of action perception (Maier & Baldwin, 2016), which is also subject to the Now-or-Never bottleneck. Although chunking is understudied in other animal species, patterns of phrasal-level change in humpback whale song have been found to be driven by cultural transmission in a manner similar to human language (Garland, Rendell, Lamoni, Poole, & Noad, 2017). ...
Article
Full-text available
Infants begin life ready to learn any of the world’s languages, but they quickly become speech-perception experts in their native language. Although this phenomenon has been well described, the mechanisms leading to native-language-listening expertise have not. In this article, we provide an in-depth review of one learning mechanism: distributional learning (DL), which has been shown to be important in phonetic category learning. DL is a domain-general statistical learning mechanism that involves tracking the relative frequency of phonetic tokens in speech input. Although DL is powerful, recent research has identified limitations to it as well. We conclude by discussing possible supplementary phonetic-learning mechanisms, focusing on the surrounding context in which infants hear phonetic tokens and how that context can augment DL and highlight important linguistic differences between perceptually similar stimuli.
Article
Full-text available
How do infants find the words in the tangle of speech that confronts them? The present study shows that by as early as 6 months of age, infants can already exploit highly familiar words (including, but not limited to, their own names) to segment and recognize adjoining, previously unfamiliar words from fluent speech. The head-turn preference procedure was used to familiarize babies with short passages in which a novel word was preceded by a familiar or a novel name. At test, babies recognized the word that followed the familiar name, but not the word that followed the novel name. This is the youngest age at which infants have been shown capable of segmenting fluent speech. Young infants have a powerful aid available to them for cracking the speech code. Their emerging familiarity with particular words, such as their own and other people's names, can provide initial anchors in the speech stream.
Article
Among the earliest and most frequent words that infants hear are their names. Yet little is known about when infants begin to recognize their own names. Using a modified version of the head-turn preference procedure, we tested whether 4.5-month-olds preferred to listen to their own names over foils that were either matched or mismatched for stress pattern. Our findings provide the first evidence that even these young infants recognize the sound patterns of their own names. Infants demonstrated significant preferences for their own names compared with foils that shared the same stress patterns, as well as foils with opposite patterns. The results indicate when infants begin to recognize sound patterns of items frequently uttered in the infants' environments.
Article
Changes in several postnatal maturational processes during neural development have been implicated as potential mechanisms underlying critical period phenomena. Lenneberg hypothesized that maturational processes similar to those that govern sensory and motor development may also constrain capabilities for normal language acquisition. Our goal, using a bilingual model, was to investigate the hypothesis that maturational constraints may have different effects upon the development of the functional specializations of distinct subsystems within language. Subjects were 61 adult Chinese/English bilinguals who were exposed to English at different points in development: 1–3, 4–6, 7–10, 11–13, and after 16 years of age. Event-related brain potentials (ERPs) and behavioral responses were obtained as subjects read sentences that included semantic anomalies, three types of syntactic violations (phrase structure, specificity constraint, and subjacency constraint), and their controls. The accuracy in judging the grammaticality of the different types of syntactic rules and their associated ERPs was affected by delays in second-language exposure as short as 1–3 years. By comparison, the N400 response and the judgment accuracies in detecting semantic anomalies were altered only in subjects who were exposed to English after 11–13 and 16 years of age, respectively. Further, the types of changes occurring in ERPs with delays in exposure were qualitatively different for semantic and syntactic processing. All groups displayed a significant N400 effect in response to semantic anomalies; however, the peak latencies of the N400 elicited in bilinguals who were exposed to English between 11–13 and >16 years occurred later, suggesting a slight slowing in processing. For syntactic processing, the ERP differences associated with delays in exposure to English were observed in the morphology and distribution of components.
Our findings are consistent with the view that maturational changes significantly constrain the development of the neural systems that are relevant for language and, further, that subsystems specialized for processing different aspects of language display different sensitive periods.
Article
Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns “Mommy” and “Daddy” to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might take the form of very specific word-world links, as in “Mommy” referring only to the infant’s mother and “Daddy” referring only to the infant’s father. The current study was designed to investigate if 6-month-olds also show evidence of comprehending words that can refer to categories of objects. The results show that 6-month-olds link the sound patterns “hand” and “feet” to videos of an adult’s hand and feet. This finding suggests that very early comprehension has a capacity beyond specific, one-to-one associations. Future research will need to consider how developing categorization abilities, social experiences, and parent word use influence the beginnings of word comprehension.
Article
Previous studies of infants' comprehension of words estimated the onset of this ability at 9 months or later. However, these estimates were based on responses to names of relatively immobile, familiar objects. Comprehension of names referring to salient, animated figures (e.g., one's parents) may begin even earlier. In a test of this possibility, 6-month-olds were shown side-by-side videos of their parents while listening to the words "mommy" and "daddy." The infants looked significantly more at the video of the named parent. A second experiment revealed that infants do not associate these words with men and women in general. Infants shown videos of unfamiliar parents did not adjust their looking patterns in response to "mommy" and "daddy."
Article
Infant phonetic perception reorganizes in accordance with the native language by 10 months of age. One mechanism that may underlie this perceptual change is distributional learning, a statistical analysis of the distributional frequency of speech sounds. Previous distributional learning studies have tested infants of 6–8 months, an age at which native phonetic categories have not yet developed. Here, three experiments test infants of 10 months to help illuminate perceptual ability following perceptual reorganization. English-learning infants did not change discrimination in response to nonnative speech sound distributions from either a voicing distinction (Experiment 1) or a place-of-articulation distinction (Experiment 2). In Experiment 3, familiarization to the place-of-articulation distinction was doubled to increase the amount of exposure, and in this case infants began discriminating the sounds. These results extend the processes of distributional learning to a new phonetic contrast, and reveal that at 10 months of age, distributional phonetic learning remains effective, but is more difficult than before perceptual reorganization.
Article
Lenneberg (1967) hypothesized that language could be acquired only within a critical period, extending from early infancy until puberty. In its basic form, the critical period hypothesis need only have consequences for first language acquisition. Nevertheless, it is essential to our understanding of the nature of the hypothesized critical period to determine whether or not it extends as well to second language acquisition. If so, it should be the case that young children are better second language learners than adults and should consequently reach higher levels of final proficiency in the second language. This prediction was tested by comparing the English proficiency attained by 46 native Korean or Chinese speakers who had arrived in the United States between the ages of 3 and 39, and who had lived in the United States between 3 and 26 years by the time of testing. These subjects were tested on a wide variety of structures of English grammar, using a grammaticality judgment task. Both correlational and t-test analyses demonstrated a clear and strong advantage for earlier arrivals over the later arrivals. Test performance was linearly related to age of arrival up to puberty; after puberty, performance was low but highly variable and unrelated to age of arrival. This age effect was shown not to be an inadvertent result of differences in amount of experience with English, motivation, self-consciousness, or American identification. The effect also appeared on every grammatical structure tested, although the structures varied markedly in the degree to which they were well mastered by later learners. The results support the conclusion that a critical period for language acquisition extends its effects to second language acquisition.
Article
For nearly two decades it has been known that infants' perception of speech sounds is affected by native language input during the first year of life. However, definitive evidence of a mechanism to explain these developmental changes in speech perception has remained elusive. The present study provides the first evidence for such a mechanism, showing that the statistical distribution of phonetic variation in the speech signal influences whether 6- and 8-month-old infants discriminate a pair of speech sounds. We familiarized infants with speech sounds from a phonetic continuum, exhibiting either a bimodal or unimodal frequency distribution. During the test phase, only infants in the bimodal condition discriminated tokens from the endpoints of the continuum. These results demonstrate that infants are sensitive to the statistical distribution of speech sounds in the input language, and that this sensitivity influences speech perception.
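The logic of this bimodal-versus-unimodal design can be caricatured in a few lines of code. In the sketch below (the exposure counts, the eight-step continuum, and the peak-counting decision rule are all illustrative assumptions, not the study's actual stimuli or a model of infant learning), a learner tallies how often each step of a phonetic continuum occurs during familiarization and treats the continuum endpoints as belonging to different categories only when that tally has two modes:

```python
def count_peaks(freqs):
    """Count modes (local maxima) in a frequency histogram over the
    continuum; a flat-topped peak is counted once."""
    peaks = 0
    for i, f in enumerate(freqs):
        left = freqs[i - 1] if i > 0 else -1
        right = freqs[i + 1] if i < len(freqs) - 1 else -1
        # A peak rises above its left neighbor and does not fall short
        # of its right neighbor (so a plateau contributes one peak).
        if f > left and f >= right:
            peaks += 1
    return peaks

def discriminates_endpoints(exposure):
    """Toy decision rule: treat the endpoint tokens as different
    categories only when the exposure histogram is bimodal."""
    return count_peaks(exposure) >= 2

# Hypothetical exposure counts over an 8-step phonetic continuum.
bimodal  = [4, 16, 8, 4, 4, 8, 16, 4]    # mass concentrated at steps 2 and 7
unimodal = [2, 5, 10, 18, 18, 10, 5, 2]  # mass concentrated at the midpoint

print(discriminates_endpoints(bimodal))   # True: two modes, two categories
print(discriminates_endpoints(unimodal))  # False: one mode, one category
```

The same frequency information reaches the learner in both conditions; only its distributional shape differs, which is the sense in which discrimination here is driven by statistics of the input rather than by the tokens themselves.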
Article
Infants learn language with remarkable speed, but how they do it remains a mystery. New data show that infants use computational strategies to detect the statistical and prosodic patterns in language input, and that this leads to the discovery of phonemes and words. Social interaction with another human being affects speech learning in a way that resembles communicative learning in songbirds. The brain's commitment to the statistical and prosodic patterns that are experienced early in life might help to explain the long-standing puzzle of why infants are better language learners than adults. Successful learning by infants, as well as constraints on that learning, are changing theories of language acquisition.
Article
The nature and origin of the human capacity for acquiring language is not yet fully understood. Here we uncover early roots of this capacity by demonstrating that humans are born with a preference for listening to speech. Human neonates adjusted their high amplitude sucking to preferentially listen to speech, compared with complex non-speech analogues that controlled for critical spectral and temporal parameters of speech. These results support the hypothesis that human infants begin language acquisition with a bias for listening to speech. The implications of these results for language and communication development are discussed. For a commentary on this article see Rosen and Iverson (2007).