Chapter

Language Development

Abstract

Typically developing children will rapidly and comprehensively master at least one of the more than 6,000 languages that exist around the globe. The complexity of these language systems and the speed and apparent facility with which children master them have been the topic of philosophical and scientific speculation for millennia. In 397 AD, in reflecting on his own acquisition of language, St. Augustine wrote “… as I heard words repeatedly used in their proper places in various sentences, I gradually learnt to understand what objects they signified; and after I had trained my mouth to form these signs, I used them to express my own desires” (quoted in Wittgenstein, 1953/2001). St. Augustine’s intuitions notwithstanding, more recent thinking and research on children’s language acquisition suggest that the problem facing a child is much more intricate than simply remembering the association between a sound and an object and learning to reproduce the word’s sound. The rich and multitiered nature of this problem—and the many and varied paths to its solution (Bates, Bretherton, & Snyder, 1988)—make the process of language acquisition a unique window into multiple low-level and high-level developmental processes.


... Exploring the many paths of language can reveal insights into the development of more general cognitive functions. Studies of language development have been especially helpful for understanding the emergence of functional specialization as well as the scale and flexibility of cognitive processes during learning [1]. Language learning is also robust in the face of various biological deviations from the norm, which alter how children interpret whatever input they receive [2]. ...
Article
English language learning is a complex process. In order for students to learn and acquire a language, various learning opportunities and strategies must be provided by the teacher to create meaningful interactions using the target language. Traditionally, teachers use textbooks and other printed materials to teach the various language or communication skills. With the advent of technology, teachers learn to integrate varied technologies to successfully teach English in their respective classes. This paper presents the various technology tools that teachers could use in teaching English—whether as a second language or as a foreign language. It also discusses how the macro skills—listening, speaking, reading and writing—could be taught with the integration of available technologies. There are a number of advantages in using technology in language teaching; however, there are also challenges. This paper also considers and lays out the roadblocks that teachers may encounter when integrating technology in their English language classes. Overall, this paper offers a perspective on how technology could be used as an effective tool in teaching English.
Chapter
Full-text available
Strong correspondences have been found between symbol development in play and language in three aspects: (1) correlations in frequency and rate, (2) overlap in referential content, and (3) parallels in qualitative levels and sequences of development. Within language, a developmental progression has been found from what is termed “nonreferential” to “referential” uses of words. The nonreferential words are not names for actions or entities; rather, they are procedures that are used in restricted contexts that may include particular actions or entities. At each developmental level, the difference between substantives and function words has to do with the kinds of referents—entities, events, relationships—involved in that language game or procedure. However, both kinds of words are in themselves functions. The field of child language research has been divided regarding: (1) the developmental levels of word use—that is, in terms of contextual freedom, (2) the kinds of features that predominate in the rules for using words, (3) the structure of the categories that underlie word use, and (4) individual differences in the things that children want to accomplish with words.
Article
Full-text available
Differences among languages offer a way of studying the process of infant adaptation from broad initial capacities to language-specific phonetic production. We designed analyses of the distribution of consonantal place and manner categories in French, English, Japanese, and Swedish to determine (1) whether systematic differences can be found in the babbling and first words of infants from different language backgrounds, and, if so, (2) whether these differences are related to the phonetic structure of the language spoken in the environment. Five infants from each linguistic environment were recorded under similar conditions from babbling only to the production of 25 words in a session. Although all of the infants generally made greater use of labials, dentals, and stops than of other classes of sounds, a clear phonetic selection could already be discerned in babbling, leading to statistically significant differences among the groups. This selection can be seen to arise from phonetic patterns of the ambient language. Comparison of the babbling and infant word repertoires reveals differences reflecting the motoric consequences of sequencing constraints.
Article
Full-text available
In adults, patterns of neural activation associated with perhaps the most basic language skill—overt object naming—are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity.
Article
Full-text available
The early relationship between children’s emerging articulatory abilities and their capacity to process speech input was investigated, following recent studies with English-learning infants. Twenty-six monolingual Italian-learning infants were tested at 6 months (no consistent and stable use of consonants, or vocal motor schemes [VMS]) and at the age at which they displayed use of at least one VMS. Perceptual testing was based on lists of nonwords containing one of three categories of sounds each: produced by the infant (own VMS), not yet produced but typical of that age (other VMS), or not typically produced by infants at that age (non-VMS). In addition, size of expressive lexicon at 12 months and 18 months was assessed using an Italian version of the MacArthur-Bates Communicative Development Inventory (CDI). The results confirmed a relation between infant preverbal production and attentional response to VMS and also between age at first VMS and 12-month vocabulary. Maternal input is shown not to be a specific determinant of individual infant production preferences. A comparison between the English and Italian experimental findings shows a stronger attentional response to VMS in isolated words as compared to sentences. These results confirm the existence of an interaction between perception and production that helps to shape the way that language develops.
Article
Full-text available
Pronouncing a novel word for the first time requires the transformation of a newly encoded speech signal into a series of coordinated, exquisitely timed oromotor movements. Individual differences in children's ability to repeat novel nonwords are associated with vocabulary development and later literacy. Nonword repetition (NWR) is often used to test clinical populations. While phonological/auditory memory contributions to learning and pronouncing nonwords have been extensively studied, much less is known about the contribution of children's oromotor skills to this process. Two independent cohorts of children (7-13 years, N = 40, and 6.9-7.7 years, N = 37) were tested on a battery of linguistic and non-linguistic tests, including NWR and oromotor tasks. In both cohorts, individual differences in oromotor control were a significant contributor to NWR abilities; moreover, in an omnibus analysis including experimental and standardized tasks, oromotor control predicted the most unique variance in NWR. Results indicate that nonlinguistic oromotor skills contribute to children's NWR ability, and suggest that important aspects of language learning and consequent language deficits may be rooted in the ability to perform complex sensorimotor transformations.
Article
Full-text available
The purpose of this research was to replicate and extend some of the findings of Hart and Risley using automatic speech processing instead of human transcription of language samples. The long-term goal of this work is to make this approach to speech processing accessible to researchers and clinicians working on a daily basis with families and young children. Twelve-hour-long digital audio recordings were obtained repeatedly in the homes of middle to upper SES families for a sample of typically developing infants and toddlers (N = 30). These recordings were processed automatically using a measurement framework based on the work of Hart and Risley. Like Hart and Risley, the current findings indicated vast differences in individual children’s home language environments (i.e., adult word count), children’s vocalizations, and conversational turns. Automated processing compared favorably to the original Hart and Risley estimates that were based on transcription. Adding to Hart and Risley’s findings were new descriptions of patterns of daily talk and relationships to widely used outcome measures, among other findings. Implications for research and practice are discussed.
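To make the measurement idea concrete, the following minimal Python sketch computes two Hart-and-Risley-style quantities, adult word count and conversational turns, from a hypothetical list of time-stamped, speaker-labeled utterances. It is an illustration of the concepts only, not the automated processing pipeline used in the study; the `Utterance` structure and the 5-second turn window are assumptions.

```python
# Minimal sketch (hypothetical data structures, not the study's pipeline) of two
# home-language-environment measures: adult word count and conversational turns.

from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str      # "adult" or "child"
    start: float      # seconds from start of recording
    end: float
    n_words: int      # word count from a hypothetical recognizer

def adult_word_count(utterances):
    """Total words produced by adult speakers."""
    return sum(u.n_words for u in utterances if u.speaker == "adult")

def conversational_turns(utterances, max_gap=5.0):
    """Count adult-child (or child-adult) alternations separated by <= max_gap seconds."""
    turns = 0
    ordered = sorted(utterances, key=lambda u: u.start)
    for prev, cur in zip(ordered, ordered[1:]):
        if prev.speaker != cur.speaker and (cur.start - prev.end) <= max_gap:
            turns += 1
    return turns

if __name__ == "__main__":
    sample = [
        Utterance("adult", 0.0, 2.1, 6),
        Utterance("child", 2.8, 3.5, 1),
        Utterance("adult", 4.0, 6.0, 5),
        Utterance("child", 30.0, 30.6, 1),   # too far from the previous adult utterance
    ]
    print(adult_word_count(sample))       # 11
    print(conversational_turns(sample))   # 2
```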
Article
Full-text available
Early language skills vary considerably across children, especially before the age of about two years. Thus, it can be difficult to distinguish between 'late bloomers' and children who show a language delay or impairment. Here we present the results of a longitudinal study wherein toddlers' performance on a looking-time-based 'Switch' task of word-object association (Stager & Werker, 1997) was related to the children's later language skills. Word-object association performance at 17 or 20 months was significantly related to scores on some standardized tests of language comprehension and production up to two and a half years later. The implications of these results for further early identification research are discussed.
Article
Full-text available
We propose that the crucial difference between human cognition and that of other species is the ability to participate with others in collaborative activities with shared goals and intentions: shared intentionality. Participation in such activities requires not only especially powerful forms of intention reading and cultural learning, but also a unique motivation to share psychological states with others and unique forms of cognitive representation for doing so. The result of participating in these activities is species-unique forms of cultural cognition and evolution, enabling everything from the creation and use of linguistic symbols to the construction of social norms and individual beliefs to the establishment of social institutions. In support of this proposal we argue and present evidence that great apes (and some children with autism) understand the basics of intentional action, but they still do not participate in activities involving joint intentions and attention (shared intentionality). Human children's skills of shared intentionality develop gradually during the first 14 months of life as two ontogenetic pathways intertwine: (1) the general ape line of understanding others as animate, goal-directed, and intentional agents; and (2) a species-unique motivation to share emotions, experience, and activities with other persons. The developmental outcome is children's ability to construct dialogic cognitive representations, which enable them to participate in earnest in the collectivity that is human cognition.
Article
Full-text available
In contrast to vision, where retinotopic mapping alone can define areal borders, primary auditory areas such as A1 are best delineated by combining in vivo tonotopic mapping with postmortem cyto- or myeloarchitectonics from the same individual. We combined high-resolution (800 μm) quantitative T1 mapping with phase-encoded tonotopic methods to map primary auditory areas (A1 and R) within the "auditory core" of human volunteers. We first quantitatively characterize the highly myelinated auditory core in terms of shape, area, cortical depth profile, and position, with our data showing considerable correspondence to postmortem myeloarchitectonic studies, both in cross-participant averages and in individuals. The core region contains two "mirror-image" tonotopic maps oriented along the same axis as observed in macaque and owl monkey. We suggest that these two maps within the core are the human analogs of primate auditory areas A1 and R. The core occupies a much smaller portion of tonotopically organized cortex on the superior temporal plane and gyrus than is generally supposed. The multimodal approach to defining the auditory core will facilitate investigations of structure-function relationships and comparative neuroanatomical studies, and promises new biomarkers for diagnosis and clinical studies.
Article
Full-text available
Reviews literature on differences in characteristics of language development. Some children have been found to emphasize single words, simple productive rules for combining words, nouns and noun phrases, and referential functions; others use whole phrases and formulas, pronouns, compressed sentences, and expressive or social functions. The evidence for 2 styles of acquisition and their continuity over time is examined. Explanations in terms of hemispheric functions, cognitive maturation, cognitive style, and environmental context are considered, and an explanation in terms of the interaction of individual and environment in different functional contexts is suggested. Implications for development and the mastery of complex systems are discussed.
Article
Full-text available
We combined quantitative relaxation rate (R1 = 1/T1) mapping—to measure local myelination—with fMRI-based retinotopy. Gray–white and pial surfaces were reconstructed and used to sample R1 at different cortical depths. Like myelination, R1 decreased from deeper to superficial layers. R1 decreased passing from V1 and MT, to immediately surrounding areas, then to the angular gyrus. High R1 was correlated across the cortex with convex local curvature so the data was first “de-curved”. By overlaying R1 and retinotopic maps, we found that many visual area borders were associated with significant R1 increases including V1, V3A, MT, V6, V6A, V8/VO1, FST, and VIP. Surprisingly, retinotopic MT occupied only the posterior portion of an oval-shaped lateral occipital R1 maximum. R1 maps were reproducible within individuals and comparable between subjects without intensity normalization, enabling multi-center studies of development, aging, and disease progression, and structure/function mapping in other modalities.
Article
Full-text available
In this article, we present a summary of recent research linking speech perception in infancy to later language development, as well as a new empirical study examining that linkage. Infant phonetic discrimination is initially language universal, but a decline in phonetic discrimination occurs for nonnative phonemes by the end of the 1st year. Exploiting this transition in phonetic perception between 6 and 12 months of age, we tested the hypothesis that the decline in nonnative phonetic discrimination is associated with native-language phonetic learning. Using a standard behavioral measure of speech discrimination in infants at 7 months and measures of their language abilities at 14, 18, 24, and 30 months, we show (a) a negative correlation between infants' early native and nonnative phonetic discrimination skills and (b) that native- and nonnative-phonetic discrimination skills at 7 months differentially predict future language ability. Better native-language discrimination at 7 months predicts accelerated later language abilities, whereas better nonnative-language discrimination at 7 months predicts reduced later language abilities. The discussion focuses on (a) the theoretical connection between speech perception and language development and (b) the implications of these findings for the putative "critical period" for phonetic learning. Work in my laboratory has recently been focused on two fundamental questions and their theoretical intersect. The first is the role that infant speech perception plays in the acquisition of language. The second is whether early speech perception can reveal the mechanism underlying the putative "critical period" in language acquisition.
Article
Although aphasia is often characterized as a selective impairment in language function, left hemisphere lesions may cause impairments in semantic processing of auditory information, not only in verbal but also in nonverbal domains. We assessed the ‘online’ relationship between verbal and nonverbal auditory processing by examining the ability of 30 left hemisphere-damaged aphasic patients to match environmental sounds and linguistic phrases to corresponding pictures. The verbal and nonverbal task components were matched carefully through a norming study; 21 age-matched controls and five right hemisphere-damaged patients were also tested to provide further reference points. We found that, while the aphasic groups were impaired relative to normal controls, they were impaired to the same extent in both domains, with accuracy and reaction time for verbal and nonverbal trials revealing unusually high correlations (r = 0.74 for accuracy, r = 0.95 for reaction time). Severely aphasic patients tended to perform worse in both domains, but lesion size did not correlate with performance. Lesion overlay analysis indicated that damage to posterior regions in the left middle and superior temporal gyri and to the inferior parietal lobe was a predictor of deficits in processing for both speech and environmental sounds. The lesion mapping and further statistical assessments reliably revealed a posterior superior temporal region (Wernicke’s area, traditionally considered a language-specific region) as being differentially more important for processing nonverbal sounds compared with verbal sounds. These results suggest that, in most cases, processing of meaningful verbal and nonverbal auditory information breaks down together in stroke and that subsequent recovery of function applies to both domains. This suggests that language shares neural resources with those used for processing information in other domains.
Article
Do the neural circuits that subserve language acquisition lose plasticity as they become tuned to the maternal language? We tested adult subjects born in Korea and adopted by French families in childhood; they have become fluent in their second language and report no conscious recollection of their native language. In behavioral tests assessing their memory for Korean, we found that they do not perform better than a control group of native French subjects who have never been exposed to Korean. We also used event-related functional magnetic resonance imaging to monitor cortical activations while the Korean adoptees and native French listened to sentences spoken in Korean, French and other, unknown, foreign languages. The adopted subjects did not show any specific activations to Korean stimuli relative to unknown languages. The areas activated more by French stimuli than by foreign stimuli were similar in the Korean adoptees and in the French native subjects, but with relatively larger extents of activation in the latter group. We discuss these data in light of the critical period hypothesis for language acquisition.
Book
This book investigates the nature of generalizations in language, drawing parallels between our linguistic knowledge and more general conceptual knowledge. The book combines theoretical, corpus, and experimental methodology to provide a constructionist account of how linguistic generalizations are learned, and how cross-linguistic and language-internal generalizations can be explained. Part I argues that broad generalizations involve the surface forms in language, and that much of our knowledge of language consists of a delicate balance of specific items and generalizations over those items. Part II addresses issues surrounding how and why generalizations are learned and how they are constrained. Part III demonstrates how independently needed pragmatic and cognitive processes can account for language-internal and cross-linguistic generalizations, without appeal to stipulations that are specific to language.
Article
Orienting biases for speech may provide a foundation for language development. Although human infants show a bias for listening to speech from birth, the relation of a speech bias to later language development has not been established. Here, we examine whether infants' attention to speech directly predicts expressive vocabulary. Infants listened to speech or non-speech in a preferential listening procedure. Results show that infants' attention to speech at 12 months significantly predicted expressive vocabulary at 18 months, while indices of general development did not. No predictive relationships were found for infants' attention to non-speech, or overall attention to sounds, suggesting that the relationship between speech and expressive vocabulary was not a function of infants' general attentiveness. Potentially ancient evolutionary perceptual capacities such as biases for conspecific vocalizations may provide a foundation for proficiency in formal systems such as language, much like the approximate number sense may provide a foundation for formal mathematics.
Article
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
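As an illustration of how word-level feedback can separate overlapping sound categories, the toy Python sketch below pools category evidence within word types during a simple EM-style loop over one-dimensional "formant" values. It is not the authors' Bayesian model; the data, word labels, and parameters are all hypothetical.

```python
# Toy illustration (not the authors' model) of word-level information
# disambiguating two overlapping phonetic categories. Tokens are 1-D "formant"
# values; each token carries the word frame it occurred in. Category assignments
# are made per word type, pooling evidence over all of that word's tokens.

import numpy as np

rng = np.random.default_rng(0)

# Two overlapping vowel categories; each hypothetical word uses exactly one.
true_means, sd = (-0.5, 0.5), 1.0
words = {"wordA": 0, "wordB": 0, "wordC": 1, "wordD": 1}
tokens = [(rng.normal(true_means[cat], sd), w)
          for w, cat in words.items() for _ in range(40)]

def fit(tokens, n_iter=50):
    values = np.array([v for v, _ in tokens])
    word_ids = np.array([hash(w) for _, w in tokens])
    means = np.array([values.min(), values.max()])      # crude initialization
    for _ in range(n_iter):
        # E-step: log-likelihood (up to constants, unit variance) per category
        ll = -0.5 * (values[:, None] - means[None, :]) ** 2
        # Pool log-likelihoods within each word type -> one assignment per word
        assign = np.empty(len(values), dtype=int)
        for w in np.unique(word_ids):
            idx = word_ids == w
            assign[idx] = ll[idx].sum(axis=0).argmax()
        # M-step: re-estimate category means from assigned tokens
        for k in (0, 1):
            if np.any(assign == k):
                means[k] = values[assign == k].mean()
    return means

print("recovered category means:", np.round(fit(tokens), 2))
```

Because every token of a given word type is forced into the same category, consistent word contexts pull apart distributions that would heavily overlap at the level of individual tokens.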
Article
The human brain is asymmetric in gross structure as well as functional organization. However, the developmental basis and trajectory of this asymmetry is unclear, and its relationship(s) to functional and cognitive development, especially language, remain to be fully elucidated. During infancy and early childhood, in concert with cortical gray matter growth, underlying axonal bundles become progressively myelinated. This myelination is critical for efficient and coherent interneuronal communication and, as revealed in animal studies, the degree of myelination changes in response to environment and neuronal activity. Using a novel quantitative magnetic resonance imaging method to investigate myelin content in vivo in human infants and young children, we investigated gross asymmetry of myelin in a large cohort of 108 typically developing children between 1 and 6 years of age, hypothesizing that asymmetry would predict language abilities in this cohort. While asymmetry of myelin content was evident in multiple cortical and subcortical regions, language ability was predicted only by leftward asymmetry of caudate and frontal cortex myelin content and rightward asymmetry in the extreme capsule. Importantly, the influence of this asymmetry was found to change with age, suggesting an age-specific influence of structure and myelin on language function. The relationship between language ability and asymmetry of myelin stabilized at ∼4 years, indicating anatomical evidence for a critical time during development before which environmental influence on cognition may be greatest.
Article
Typically developing infants differentiate strong-weak (trochaic) and weak-strong (iambic) stress patterns by 2 months of age. The ability to discriminate rhythmical patterns, such as lexical stress, has been argued to facilitate language development, suggesting that a difficulty in discriminating stress might affect early word learning as reflected in vocabulary size. Children with autism spectrum disorder (ASD) often have difficulty in correctly producing lexical stress, yet little is known about how they perceive it. The current study tested 5-month-old infants with typically developing older siblings (SIBS-TD) and infants with an older sibling diagnosed with ASD (SIBS-A) on their ability to differentiate the trochaic and iambic stress patterns of the word form gaba. SIBS-TD infants showed increased attention to the trochaic stress pattern, which was also positively correlated with vocabulary comprehension at 12 months of age. In contrast, SIBS-A infants attended equally to these stress patterns, although this was unrelated to later vocabulary size.
Article
We systematically compared fMRI results for covert (silent) and overt (spoken) versions of a language task in a representative sample of children with lesional focal epilepsy being considered for neurosurgical treatment (N=38, aged 6-17 years). The overt task was advantageous for presurgical fMRI assessments of language; it produced higher quality scans, was more sensitive for identifying activation in core language regions on an individual basis, and provided an online measure of performance crucial for improving the yield of presurgical fMRI.
Article
Recent research has documented systematic individual differences in early lexical development. The current study investigated the relationship of these differences to differences in the way mothers and children regulate each other's attentional states. Mothers of 6 one-year-olds kept diary records and were videotaped with their children at monthly intervals as well. Language measures from the diary were related to measures of attention manipulation and maintenance derived from a coding of the videotaped interactions. Results showed that when mothers initiated interactions by directing their child's attention, rather than by following into it, their child learned fewer object labels and more personal-social words. Dyads who maintained sustained bouts of joint attentional focus had children with larger vocabularies overall. It was concluded that the way mothers and children regulate each other's attention is an important factor in children's early lexical development.
Article
In this paper, we will describe what are (in our view) the newest and most exciting trends in current research on language development; trends that are likely to predominate in the few years that remain until the millennium. The paper is organized into six sections: (1) advances in data sharing (including the Child Language Data Exchange System), (2) improved description and quantification of the linguistic data to which children are exposed and the data that they produce (with implications for theories of language learning); (3) new theories of learning in neural networks that challenge old assumptions about the "learnability" (or unlearnability) of language, (4) increased understanding of the nonlinear dynamics that may underlie behavioral change, (5) research on the neural correlates of language learning, and (6) an increased understanding of the social factors that influence normal and abnormal language development.
Article
The aim of this paper is to provide an overview of an emerging new framework for understanding early phonetic development—the Natural Referent Vowel (NRV) framework. The initial support for this framework was the finding that directional asymmetries occur often in infant vowel discrimination. The asymmetries point to an underlying perceptual bias favoring vowels that fall closer to the periphery of the F1/F2 vowel space. In Polka and Bohn (2003) we reviewed the data on asymmetries in infant vowel perception and proposed that certain vowels act as natural referent vowels and play an important role in shaping vowel perception. In this paper we review findings from studies of infant and adult vowel perception that emerged since Polka and Bohn (2003), from other labs and from our own work, and we formally introduce the NRV framework. We outline how this framework connects with linguistic typology and other models of speech perception and discuss the challenges and promise of NRV as a conceptual tool for advancing our understanding of phonetic development.
Article
Contents: Acknowledgments; 1. Introduction; 2. In the beginning was the verb; 3. Methods and an introduction to T's language; 4. Change of state verbs and sentences; 5. Activity verbs and sentences; 6. Other grammatical structures; 7. The development of T's verb lexicon; 8. The development of T's grammar; 9. Language acquisition as cultural learning; References; Appendix; Index.
Book
From Preface and Introduction (chapter 1): A few years ago, we published a book about the transition from gesture to the first word, trying to show how linguistic and nonlinguistic symbols emerge through the interaction of more primitive cognitive systems (Bates, Benigni, Bretherton, Camaioni, and Volterra, 1979). In this book we have moved a step further, tracking the passage from first words to grammar in another sample of healthy middle-class children. Once again, we have focused on the way a complex system emerges from simpler beginnings. In both works, we have argued that nature and children both create new machines out of old parts. The capacity to name things is built out of several nonlinguistic skills. The capacity to acquire grammar relies on a reworking of the same mechanisms that are used to build a lexicon. The emphasis in both cases is on continuity rather than discontinuity, construction rather than maturation. Because they are not uncontroversial, the logical, methodological, and empirical underpinnings of this work need to be spelled out in more detail before we can proceed. These are covered in Chapters 2 (modularity hypothesis), 3 (correlational research in language development), and 4 (empirical groundwork for our current research in a review of the literature on individual differences in language development). This will lead directly into an overview of the design of our longitudinal study. The next twelve chapters each contain one substudy within the structure of our longitudinal project. When this journey is complete, we will return to the issues outlined in Chapters 2 - 4, summarizing what individual differences in early language development have told us about language learning and the architecture of the Language Acquisition Device.
Article
Infants begin to segment words from fluent speech during the same time period that they learn phonetic categories. Segmented words can provide a potentially useful cue for phonetic learning, yet accounts of phonetic category acquisition typically ignore the contexts in which sounds appear. We present two experiments to show that, contrary to the assumption that phonetic learning occurs in isolation, learners are sensitive to the words in which sounds appear and can use this information to constrain their interpretation of phonetic variability. Experiment 1 shows that adults use word-level information in a phonetic category learning task, assigning acoustically similar vowels to different categories more often when those sounds consistently appear in different words. Experiment 2 demonstrates that 8-month-old infants similarly pay attention to word-level information and that this information affects how they treat phonetic contrasts. These findings suggest that phonetic category learning is a rich, interactive process that takes advantage of many different types of cues that are present in the input.
Article
Thirteen papers in this book illustrate MacWhinney and Bates's Competition Model (CM), with a focus on cross-linguistic processing. Studies in this volume show that (1) the CM is useful in predicting certain gross cross-linguistic differences of comprehension, particularly in relation to actor assignment and (2) children's processing strategies change over the years.
Article
Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
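The two-step logic described in this abstract, fitting voxelwise encoding models and then running principal components analysis on the fitted weights, can be sketched on simulated data as below. This is a schematic Python reconstruction under assumed dimensions and a simple ridge solver, not the study's actual pipeline.

```python
# Schematic sketch (simulated data, not the study's pipeline):
# (1) fit regularized voxelwise encoding models predicting each voxel's response
#     from indicator features for categories present in the stimulus, then
# (2) run PCA on the fitted weights to recover a low-dimensional semantic space.

import numpy as np

rng = np.random.default_rng(1)
n_time, n_categories, n_voxels, n_dims = 600, 50, 200, 3

# Simulated ground truth: category weights lie in a low-dimensional space.
basis = rng.normal(size=(n_categories, n_dims))
voxel_loadings = rng.normal(size=(n_dims, n_voxels))
X = (rng.random((n_time, n_categories)) < 0.1).astype(float)   # category indicators
Y = X @ basis @ voxel_loadings + rng.normal(scale=0.5, size=(n_time, n_voxels))

# (1) Ridge regression, solved in closed form for all voxels at once.
alpha = 10.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_categories), X.T @ Y)  # categories x voxels

# (2) PCA on the category-by-voxel weight matrix via SVD.
W_centered = W - W.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(W_centered, full_matrices=False)
semantic_axes = U[:, :n_dims]        # each column: one recovered semantic dimension
explained = (S**2 / np.sum(S**2))[:n_dims]
print("variance explained by first dimensions:", np.round(explained, 2))
```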
Article
A fundamental step in learning words is the development of an association between a sound pattern and an element in the environment. Here we explore the nature of this associative ability in 12-month-olds, examining whether it is constrained to privilege particular word forms over others. Forty-eight infants were presented with sets of novel English content-like word-object pairings (e.g. fep) or novel English function-like word-object (e.g. iv) pairings until they habituated. Results indicated that infants associated novel content-like words, but not the novel function-like words, with novel objects. These results demonstrate that the mechanism with which basic word-object associations are formed is remarkably sophisticated by the onset of productive language. That is, mere associative pairings are not sufficient to form mappings. Rather the system requires well-formed noun-like words to co-occur with objects in order for the linkages to arise.
Article
The timing and developmental factors underlying the establishment of language dominance are poorly understood. We investigated the degree of lateralization of traditional frontotemporal and modulatory prefrontal-cerebellar regions of the distributed language network in children (n = 57) ages 4 to 12, a critical period for language consolidation. We examined the relationship between the strength of language lateralization and neuropsychological measures and task performance. The fundamental language network is established by age 4, with ongoing maturation of language functions evidenced by strengthening of lateralization in the traditional frontotemporal language regions; temporal regions were strongly and consistently lateralized by age seven, while frontal regions had greater variability and were less strongly lateralized through age 10. In contrast, the modulatory prefrontal-cerebellar regions were the least strongly lateralized, and degree of lateralization was not associated with age. Stronger core language skills were significantly correlated with greater right lateralization in the cerebellum.
Article
Before the end of the first year of life, infants begin to lose the ability to perceive distinctions between sounds that are not phonemic in their native language. It is typically assumed that this developmental change reflects the construction of language-specific phoneme categories, but how these categories are learned largely remains a mystery. Peperkamp, Le Calvez, Nadal, and Dupoux (2006) present an algorithm that can discover phonemes using the distributions of allophones as well as the phonetic properties of the allophones and their contexts. We show that a third type of information source, the occurrence of pairs of minimally differing word forms in speech heard by the infant, is also useful for learning phonemic categories and is in fact more reliable than purely distributional information in data containing a large number of allophones. In our model, learners build an approximation of the lexicon consisting of the high-frequency n-grams present in their speech input, allowing them to take advantage of top-down lexical information without needing to learn words. This may explain how infants have already begun to exhibit sensitivity to phonemic categories before they have a large receptive lexicon.
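The minimal-pair cue described here can be illustrated with a toy Python sketch that approximates the lexicon with high-frequency n-grams from segmented input and counts attested minimal pairs for a candidate sound contrast. This is a simplified illustration, not the authors' algorithm; the function names and the segmented example input are hypothetical.

```python
# Toy sketch (not the authors' algorithm) of the minimal-pair cue: approximate
# the lexicon with high-frequency n-grams, then ask whether swapping one
# candidate sound for another inside those n-grams yields other attested
# n-grams. Attested minimal pairs are evidence of a phonemic contrast.

from collections import Counter

def high_frequency_ngrams(segmented_utterances, n=3, min_count=2):
    """Approximate lexicon: segment n-grams occurring at least min_count times."""
    counts = Counter()
    for utt in segmented_utterances:
        segs = utt.split()
        for i in range(len(segs) - n + 1):
            counts[tuple(segs[i:i + n])] += 1
    return {g for g, c in counts.items() if c >= min_count}

def minimal_pair_evidence(lexicon, sound_a, sound_b):
    """Count pseudo-words that become another pseudo-word when a single
    sound_a is replaced by sound_b (a minimal pair)."""
    pairs = 0
    for form in lexicon:
        for i, seg in enumerate(form):
            if seg == sound_a:
                swapped = form[:i] + (sound_b,) + form[i + 1:]
                if swapped in lexicon:
                    pairs += 1
    return pairs

if __name__ == "__main__":
    # Hypothetical segmented input: [r] and [l] occur in a minimal pair; [t] and [th] do not.
    input_utterances = ["r a t", "l a t", "r a t", "l a t", "t a k", "t a k"]
    lex = high_frequency_ngrams(input_utterances, n=3)
    print(minimal_pair_evidence(lex, "r", "l"))   # > 0: likely distinct phonemes
    print(minimal_pair_evidence(lex, "t", "th"))  # 0: no minimal-pair evidence
```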
Chapter
The focus of this chapter is on how infants perceive, process, and learn from their auditory environments. We focus on mechanisms of hearing, speech perception, and early language learning, with the goal of elucidating recent progress in this field and its historical context. The literature reviewed includes studies of hearing development in young infants, the beginnings of speech perception and tuning to the native language, word segmentation, word learning, phonological acquisition, and the early stages of language acquisition. Throughout, we focus on current controversies along with theoretical and methodological innovations. Keywords: hearing; infants; language; speech; words