Article

Word-form familiarity bootstraps infant speech segmentation

Authors: Nicole Altvater-Mackensen, Nivedita Mani

Abstract

At about 7 months of age, infants listen longer to sentences containing familiar words - but not to deviant pronunciations of familiar words (Jusczyk & Aslin, 1995). This finding suggests that infants are able to segment familiar words from fluent speech and that they store words in sufficient phonological detail to recognize deviations from a familiar word. It does not, however, address whether it is nevertheless easier for infants to segment words from sentences when these words sound similar to familiar words. Across three experiments, the present study investigates whether familiarity with a word helps infants segment similar-sounding words from fluent speech and whether they are able to discriminate these similar-sounding words from other words later on. Results suggest that word-form familiarity may be a powerful tool bootstrapping further lexical acquisition.


... Given learners' demonstrable sensitivity to the distribution of linguistic patterns, it follows that items appearing in speech with a higher frequency than others might have a particularly important role in language learning, especially since high-frequency items are more easily perceived than lower-frequency words of similar length (Morgan, Shi, & Allopenna, 1996; Zipf, 1935), and provide more reliable co-occurrence information than their less frequent counterparts (Monaghan, Chater, & Christiansen, 2005). Indeed, recent research has suggested that the presence of high-frequency words in speech may be advantageous for language acquisition, particularly for speech segmentation (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Kurumada, Meylan, & Frank, 2013; Mersad & Nazzi, 2012). ...
... We hypothesised that high-frequency words operating as markers of word boundaries might also assist with speech segmentation (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Kurumada et al., 2013; Mersad & Nazzi, 2012). Additionally, we hypothesised that these marker words might simultaneously constrain learning about the role of other words in the language by contributing to the early formation of grammatical categories (Lany, 2014). ...
... This is especially noteworthy given the increased complexity of speech in the Markers condition (i.e., speech with multiple types of words, and words of different lengths). Participants' ability to recognise targets during testing in the absence of the marker words is consistent with prior demonstrations that high-frequency marker words can be used as anchor points for segmentation to occur around (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Cunillera et al., 2010; Mersad & Nazzi, 2012). In this case, it is possible that the high-frequency markers led to comparatively similar performance to the control group - despite the increased complexity of the signal. ...
Article
Full-text available
High frequency words have been suggested to benefit both speech segmentation and grammatical categorisation of the words around them. Despite utilising similar information, these tasks are usually investigated separately in studies examining learning. We determined whether including high frequency words in continuous speech could support categorisation when words are being segmented for the first time. We familiarised learners with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words. Crucially, marker words distinguished targets into two distributionally-defined categories. We measured learning with segmentation and categorisation tests, and compared performance against a control group that heard the artificial speech without these marker words (i.e., just the targets, with no cues for categorisation). Participants segmented the target words from speech in both conditions, but critically when the marker words were present, they influenced acquisition of word-referent mappings in a subsequent transfer task, with participants demonstrating better early learning for mappings that were consistent (rather than inconsistent) with the distributional categories. We propose that high-frequency words may assist early grammatical categorisation, while speech segmentation is still being learned.
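To make the familiarisation design described above concrete, the sketch below generates a continuous, unsegmented stream in which each target word is immediately preceded by a high-frequency marker word signalling its distributional category, alongside a marker-free control stream. The syllables, marker words, and target words are invented placeholders rather than the actual stimuli, so this is only an illustration of the general design.

```python
import random

# Hypothetical syllable strings: two marker words and two sets of target words
# (invented placeholders, not the actual stimuli from the study above).
MARKERS = {"A": "bi", "B": "to"}                       # high-frequency marker words
TARGETS = {"A": ["dagu", "nelo", "rafi"],              # category A targets
           "B": ["mupe", "sovu", "kide"]}              # category B targets

def make_stream(n_tokens=300, with_markers=True, seed=0):
    """Build a continuous, unsegmented 'speech' stream.

    In the marker condition each target is immediately preceded by its
    category's marker word; the control condition concatenates targets only.
    """
    rng = random.Random(seed)
    chunks = []
    for _ in range(n_tokens):
        category = rng.choice(["A", "B"])
        target = rng.choice(TARGETS[category])
        chunks.append((MARKERS[category] if with_markers else "") + target)
    return "".join(chunks)   # no pauses, so word boundaries must be inferred

print(make_stream(5, with_markers=True))    # marker+target pairs run together
print(make_stream(5, with_markers=False))   # targets only (control stream)
```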
... Whereas Spanish-Catalan infants appear to be able to segment speech by as early as 6-months of age (Bosch, Figueras, Teixido, & Ramon-Casas, 2013, but see also Bortfeld, Morgan, Golinkoff, & Rathbun, 2005), infants of other languages seem to need additional cues in order to successfully segment speech at a similar age to American English infants (Jusczyk & Aslin, 1995). German infants, for instance, show successful segmentation of words from fluent speech only if they have previously been familiarized to similar sounding words (Altvater-Mackensen & Mani, 2013), been provided with extended exposure to these words in stories at home (Schreiner, Altvater-Mackensen, & Mani, 2016), or are exposed to the words in isolation first before hearing these words embedded in sentences (Höhle & Weissenborn, 2003). ...
... This has serious consequences for early language learning in German infants, suggesting that these infants may be unable to segment words from standard German IDS until at least around 9 months of age or - at the very least - may require more input in order to be able to segment words from fluent speech. This finding is in keeping with previous studies showing that, for instance, infants were only able to demonstrate segmentation of words from fluent speech if they had been previously familiarized with similar-sounding words (Altvater-Mackensen & Mani, 2013) or that even older infants at 9 months of age were only able to segment words from fluent speech when they had been familiarized with these words over a 6-week period at home (Schreiner, Altvater-Mackensen, & Mani, 2016). Taken together, these findings suggest that German infants may require more exposure to the words to be segmented before they are able to segment words from fluent speech. ...
... Somewhat worryingly, even using a sensitive electrophysiological measure, we do not find segmentation of standard German IDS. This finding is in keeping with previous studies showing that infants from some language backgrounds may need more cues to speech segmentation at early ages, including the use of an exaggerated register, greater exposure to the words to be learned, or exposure to similar-sounding words (Altvater-Mackensen & Mani, 2013; Floccia et al., 2016; Schreiner, Altvater-Mackensen & Mani, 2016; Schreiner & Mani, 2017). ...
Preprint
Across cultures, infants are typically addressed using a special speech register, called infant-directed speech (IDS). Infants appear to benefit from being addressed in this register, although there seem to be some cross-linguistic differences in their learning from IDS. One possible explanation for these differences is that children from different backgrounds are addressed in a less exaggerated register than standard American English IDS. Against this background, we examined whether German 7.5-month-olds are able to segment words from exaggerated IDS and standard German IDS. Furthermore, in order to evaluate the potential long-term consequences of continued exposure to less or more exaggerated IDS, we also examined the impact of individual differences in maternal IDS on early speech segmentation. We found that 7.5-month-old infants were able to segment words from exaggerated, but not from standard German IDS. A potential explanation for this result comes from our finding that infants whose mothers used more exaggerated IDS were able to also segment words from standard German IDS. Taken together, the current study a) underlines the importance of IDS in early language acquisition, b) demonstrates successful exaggerated IDS segmentation abilities, and c) presents a possible explanation for cross-linguistic differences in infants’ segmentation abilities reported in the literature.
... Familiarity with a word-form has been identified as one source of information infants use in detecting and segmenting individual words from continuous speech (Altvater-Mackensen & Mani, 2013). Here we extend this finding to examine whether prior familiarity with the phonological form and meaning of a word (based on natural language exposure) influences segmentation of words from fluent speech. ...
... Infants as early as 7.5 months of age, if not earlier (Altvater-Mackensen & Mani, 2013; Bortfeld, Morgan, Golinkoff, & Rathbun, 2005), have been shown to segment words from fluent speech streams (Jusczyk & Aslin, 1995) and are able to store these word-form representations in long-term memory such that they are able to recognize them later (Jusczyk & Hohne, 1997; Schreiner, Altvater-Mackensen, & Mani, 2016). ...
... Altvater-Mackensen and Mani (2013) show that prior exposure to words also helps infants segment similar-sounding words from fluent speech. In particular, novel words that are phonologically similar to previously familiarized word-forms are readily segmented from fluent speech by infants between 6 to 8 months of age (Altvater-Mackensen & Mani, 2013). ...
Preprint
Familiarity with a word-form has been identified as one source of information infants use in detecting and segmenting individual words from continuous speech (Altvater-Mackensen & Mani, 2013). Here we extend this finding to examine whether prior familiarity with the phonological form and meaning of a word (based on natural language exposure) influences segmentation of words from fluent speech. The current study tested infants’ segmentation of pseudo-words that sound similar to words infants were likely to already be familiar with, given their natural exposure to language at home. A word-comprehension task confirmed that infants were not only familiar with the chosen word-forms but also their meaning. Results of the segmentation task suggest that infants are able to use their knowledge of previously familiar words to segment similar-sounding words from the speech stream. Thus, word-form similarity based segmentation is a powerful mechanism that can drive infants’ rapid vocabulary growth. In addition, the study confirms that 7-month-old infants already comprehend simple frequent words and are able to distinguish them from subtle mispronunciations of these words.
... The fact that younger infants at 6 months of age did not show such a preference was initially taken to suggest that the ability to detect words in fluent speech develops around 7.5 months of age. However, more recent studies report segmentation success in different contexts at younger ages as well (Altvater-Mackensen & Mani, 2013; Johnson, Seidl, & Tyler, 2014; Shukla, White, & Aslin, 2011; Thiessen & Erickson, 2013), suggesting that the context in which segmentation abilities are tested is critical to segmentation success. ...
... In between trials, the screen remained blank. However, if infants lost interest and did not look back at the screen, the experimenter initiated a flashing light paired with the sound of a ringing bell to reorient infants toward the screen (see Altvater-Mackensen & Mani, 2013, for a similar procedure). ...
... We found that German 9-month-olds successfully recognized the words they had been exposed to previously at home - regardless of whether this exposure was in the infant- or adult-directed register. In contrast, infants did not recognize isolated tokens of words they were familiarized with in a brief laboratory-based exposure phase (for similar findings in German 7-month-olds, see Altvater-Mackensen & Mani, 2013). In the following sections, we will examine the findings in more detail, outline future implications and address limitations of the present study. ...
Article
Full-text available
We examined 7.5-month-old infants' ability to segment words from infant- and adult-directed speech (IDS and ADS). In particular, we extended the standard design of most segmentation studies by including a phase where infants were repeatedly exposed to target word recordings at their own home (extended exposure) in addition to a laboratory-based familiarization. This enabled us to examine infants' segmentation of words from speech input in their naturalistic environment, extending current findings to learning outside the laboratory. Results of a modified preferential-listening task show that infants listened longer to isolated tokens of familiarized words from home relative to novel control words regardless of register. However, infants showed no recognition of words exposed to during purely laboratory-based familiarization. This indicates that infants succeed in retaining words in long-term memory following extended exposure and recognizing them later on with considerable flexibility. In addition, infants segmented words from both IDS and ADS, suggesting limited effects of speech register on learning from extended exposure in naturalistic environments. Moreover, there was a significant correlation between segmentation success and infants' attention to ADS, but not to IDS, during the extended exposure phase. This finding speaks to current language acquisition models assuming that infants' individual attention to language stimuli drives successful learning.
... The first year of life is devoted to "cracking the speech code", in Kuhl's apt description (Kuhl, 2004), and to discovering and making a commitment to the sound structure of the native language. There is additional evidence indicating that word form is leading in early word learning: young children have been shown to learn words from larger phonological categories more easily than words belonging to smaller phonological categories (Newman et al., 2008; Altvater-Mackensen & Mani, 2013), suggesting that word form plays a facilitatory role in that process. From a developmental perspective, lexical knowledge measured as vocabulary size is a major predictor of other language competences and skills, such as grammar development (Bates & Goodman, 1997) and oral and reading comprehension. ...
... This coactivation of words in the mental lexicon of toddlers appears to be primarily mediated by the phonological overlap between prime and target labels. In a visual world paradigm priming study of German-speaking toddlers, Altvater-Mackensen and Mani (2013) manipulated the phonological similarity between prime and target words. Onset priming had an interference effect, while rhyme priming had a facilitatory effect on target word recognition (e.g., Hund-Mund; Fisch-Tisch). ...
Article
Full-text available
Word learning requires successful pairing of form and meaning. A common hypothesis about the process of word learning is that initially, infants work on identifying the phonological segments corresponding to words (speech analysis), and subsequently map those segments onto meaning. A range of theories have been proposed to account for the underlying mechanisms and factors in this remarkable achievement. While some are mainly concerned with the sensorimotor affordances and perceptual properties of referents out in the world, other theories emphasize the importance of language as a system, and the relations among language units (other words or syntax). Recent approaches inspired by neuroscience suggest that the storage and processing of word meanings is supported by neural systems subserving both the representation of conceptual knowledge and its access and use (Lambon Ralph et al., Nature Reviews Neuroscience 18:42–55, 2017). Developmental disorders have been shown to impact different aspects of word learning. While impaired word knowledge is not a hallmark of Autism Spectrum Disorder (ASD), and remains largely understudied in this population, there is evidence that there are, sometimes subtle, problems in that domain, reflected in both how such knowledge is acquired and how words are used (Vulchanova et al., Word knowledge and word usage: A cross-disciplinary guide to the mental lexicon, Mouton De Gruyter, 2020). In addition, experimental evidence suggests that children with autism present with specific problems in categorizing the referents of linguistic labels, leading to subsequent problems with using those labels (Hartley and Allen, Autism 19:570–579, 2015). Furthermore, deficits have been reported in some of the underlying mechanisms, biases and use of cues in word learning, such as object shape (Field et al., Journal of Autism and Developmental Disorders 46:1210–1219, 2016; Tek et al., Autism Research 1:208–222, 2008). Finally, it is likely that symbol use might be impaired in ASD; however, the direction of the causal relationship between social and communication impairment in autism and symbolic skills is still an open question (Allen and Lewis, Journal of Autism and Developmental Disorders 45:1–3, 2015; Allen and Butler, British Journal of Developmental Psychology 38:345–362, 2020; Wainwright et al., Journal of Autism and Developmental Disorders 50:2941–2956, 2020). Further support for impaired symbol formation in autism comes from the well-attested problems with figurative, non-literal language use (e.g., metaphors, idioms, hyperbole, irony) (Vulchanova et al., Frontiers in Human Neuroscience 9:24, 2015). Here we propose that embodied theories of cognition which link perceptual experience with conceptual knowledge (see Eigsti, Frontiers in Psychology 4:224, 2013; Klin et al., Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 358:345–360, 2003) might be useful in explaining the difficulty in symbolic understanding that individuals with autism face during the word learning process.
... Zipfian distributions have been found to aid word segmentation in adult statistical learning studies, especially for larger lexica (Kurumada et al., 2013), presumably because highly frequent sequences enable rapid segmentation and can then act as anchors in subsequent utterances. This anchor effect has been found to benefit word segmentation in infant (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Mersad & Nazzi, 2012; Shi & Lepage, 2008) and adult learners (Cunillera et al., 2010; Valian & Coulson, 1988), and in recent work, Cunillera et al. (2016) documented the neural signature of this effect, demonstrating that anchor words elicited greater stimulus-preceding negativity (a marker of expectation) in adults' electroencephalography (EEG) data compared to less frequent words. ...
... In recent work, such distributions have been suggested to help speech segmentation (Kurumada et al., 2013). In terms of word frequency, highly frequent items have been proposed to aid segmentation by acting as anchor points for subsequent segmentation to occur around; these words are believed to undergo early extraction from the speech stream, before flagging the boundaries of the words they appear alongside in subsequent speech (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Kurumada et al., 2013; Mersad & Nazzi, 2012; Monaghan & Christiansen, 2010; Shi & Lepage, 2008). The precise utility of Zipfian distributions among syllables and syllable structures remains to be established; however, it is conceivable that these may serve segmentation in a similar way. ...
Article
Full-text available
To acquire language, infants must learn to segment words from running speech. A significant body of experimental research shows that infants use multiple cues to do so; however, little research has comprehensively examined the distribution of such cues in naturalistic speech. We conducted a comprehensive corpus analysis of German child-directed speech (CDS) using data from the Child Language Data Exchange System (CHILDES) database, investigating the availability of word stress, transitional probabilities (TPs), and lexical and sublexical frequencies as potential cues for word segmentation. Seven hours of data (~15,000 words) were coded, representing around an average day of speech to infants. The analysis revealed that for 97% of words, primary stress was carried by the initial syllable, implicating stress as a reliable cue to word onset in German CDS. Word identity was also marked by TPs between syllables, which were higher within than between words, and higher for backwards than forwards transitions. Words followed a Zipfian-like frequency distribution, and over two-thirds of words (78%) were monosyllabic. Of the 50 most frequent words, 82% were function words, which accounted for 47% of word tokens in the entire corpus. Finally, 15% of all utterances comprised single words. These results give rich novel insights into the availability of segmentation cues in German CDS, and support the possibility that infants draw on multiple converging cues to segment their input. The data, which we make openly available to the research community, will help guide future experimental investigations on this topic.
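As a rough illustration of the transitional-probability cue examined in the corpus analysis, the sketch below counts syllable bigrams in a toy, hand-syllabified input and computes forward TPs (normalised by the first syllable) and backward TPs (normalised by the second syllable). The utterances are invented for illustration and are not drawn from the CHILDES data analysed above.

```python
from collections import Counter

def transitional_probabilities(utterances):
    """Compute forward and backward TPs over adjacent syllable pairs.

    `utterances` is a list of syllable lists, e.g. [["the", "ba", "by"]].
    Forward TP(x -> y) = count(x, y) / count(x);
    backward TP(x -> y) = count(x, y) / count(y).
    """
    unigrams, bigrams = Counter(), Counter()
    for utt in utterances:
        unigrams.update(utt)
        bigrams.update(zip(utt, utt[1:]))   # adjacent pairs within an utterance
    forward = {pair: c / unigrams[pair[0]] for pair, c in bigrams.items()}
    backward = {pair: c / unigrams[pair[1]] for pair, c in bigrams.items()}
    return forward, backward

# Toy, hand-syllabified child-directed utterances (invented)
utts = [["the", "ba", "by"], ["the", "dog", "gy"], ["ba", "by", "sleeps"]]
fwd, bwd = transitional_probabilities(utts)
print(fwd[("ba", "by")], bwd[("ba", "by")])   # within-word pair: high TPs
```

In the toy data the within-word pair receives a TP of 1.0 in both directions, mirroring the finding that TPs are higher within than between words.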
... K. Johnson & Jusczyk, 2001), or position-specific allophonic variants (Jusczyk, Hohne, et al., 1999). Familiarity with words (Altvater-Mackensen & Mani, 2013; Bortfeld et al., 2005; Sandoval & Gómez, 2016) and positional factors, such as words occurring at utterance edges (E. K. Johnson et al., 2014; Seidl & Johnson, 2006), may further promote segmentation. ...
... van der Feest & Johnson, 2016; van Heugten & Johnson, 2014; White & Aslin, 2011). For infants under one year, however, to our knowledge, only two studies endeavoured to use more than two phases (see Altvater-Mackensen & Mani, 2013; Thiessen & Saffran, 2007, for studies with three phases with 7- and 9-month-olds, respectively). We hence concentrated on the group for which the paradigm is more firmly established. ...
... It might, therefore, be that the absence of a difference between listening times to familiarized and control words at 7.5 months of age is due to their not performing as required in the task. However, we note that even 7-month-old German infants successfully discriminate between familiarized and control words in this task given additional familiarization input (Altvater-Mackensen & Mani, 2013). Thus, while we cannot exclude the possibility that the 7.5-month-olds in the current task were not, in general, performing as expected, it is unlikely that the lack of a significant difference in listening times to familiarized and control words is solely due to this factor. ...
... The results of the current study speak to the role of IDS as an attentional spotlight in speech processing (Kuhl, 2007;Zangl & Mills, 2007). In Altvater-Mackensen and Mani (2013), the ability to segment similar-sounding words from fluent speech was interpreted in terms of word-form familiarity bootstrapping segmentation. The similarity of the to-be-segmented words to the previously familiarized words captures infants' attention in the otherwise unfamiliar speech stream and drives segmentation. ...
Article
Full-text available
While American English infants typically segment words from fluent speech by 7.5-months, studies of infants from other language backgrounds have difficulty replicating this finding. One possible explanation for this cross-linguistic difference is that the input infants from different language backgrounds receive is not as infant-directed as American English infant-directed speech (Floccia et al., 2016). Against this background, the current study investigates whether German 7.5- and 9-month-old infants segment words from fluent speech when the input is prosodically similar to American English IDS. While 9-month-olds showed successful segmentation of words from exaggerated IDS, 7.5-month-olds did not. These findings highlight (a) the beneficial impact of exaggerated IDS on infant speech segmentation, (b) cross-linguistic differences in word segmentation that are based not just on the kind of input available to children and suggest (c) developmental differences in the role of IDS as an attentional spotlight in speech segmentation.
... Bearing in mind learners' aptitude for exploiting statistics, it follows that items appearing more frequently than others in speech might prove helpful for learning. Indeed, recent research has suggested that language acquisition may benefit from the presence of high-frequency words, with a variety of studies demonstrating that these may be advantageous for speech segmentation in particular (Altvater-Mackensen & Mani, 2013; Bortfeld, Morgan, Golinkoff, & Rathbun, 2005; Mersad & Nazzi, 2012). ...
... These findings document a rare demonstration of adults' ability to use statistical information to segment continuous speech that contains words of varying length (see Johnson & Tyler, 2010). That participants were able to segment around the high frequency marker words supports the possibility that learners were able to use these as anchors for segmentation (Altvater-Mackensen & Mani, 2013; Mersad & Nazzi, 2012; Monaghan & Christiansen, 2010), although in this instance they did not significantly boost segmentation compared to a stream containing just targets. Perhaps increasing pre-exposure frequency to these marker words would result in better performance (see, e.g., Bortfeld et al., 2010, for benefit of prior exposure). ...
Conference Paper
Full-text available
Recent studies suggest that high-frequency words may benefit speech segmentation (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005) and grammatical categorisation (Monaghan, Christiansen, & Chater, 2007). To date, these tasks have been examined separately, but not together. We familiarised adults with continuous speech comprising repetitions of target words, and compared learning to a language in which targets appeared alongside high-frequency marker words. Marker words reliably preceded targets, and distinguished them into two otherwise unidentifiable categories. Participants completed a 2AFC segmentation test, and a similarity judgement categorisation test. We tested transfer to a word-picture mapping task, where words from each category were used either consistently or inconsistently to label actions/objects. Participants segmented the speech successfully, but only demonstrated effective categorisation when speech contained high-frequency marker words. The advantage of marker words extended to the early stages of the transfer task. Findings indicate the same high-frequency words may assist speech segmentation and grammatical categorisation.
... There were no other differences in the procedure across infants familiarized with IDS and ADS. Infants were tested using a variation of the Preferential Listening Procedure employed by Jusczyk and Aslin (1995), similar to Altvater-Mackensen and Mani (2013). The experiment was run using the Lincoln Infant Lab Package (Meints & Woodford, 2008). ...
... With regard to German infants, studies suggest that infants from German language backgrounds may require additional information to segment words from fluent speech at the age at which American English infants successfully segment speech. Thus, Altvater-Mackensen and Mani (2013) find that German infants segment words from fluent speech if they have been given recent exposure to similar-sounding words immediately prior to the familiarization phase. Similarly, Höhle and Weissenborn (2003) find that German infants listen longer to sentences containing words they had previously been presented with in isolation (but not the other way around) only at 9-months of age. ...
Article
One of the first challenges facing the young language learner is the task of segmenting words from a natural language speech stream, without prior knowledge of how these words sound. Studies with younger children find that it is easier for them to segment words from fluent speech when the words are presented in infant-directed speech, i.e., the kind of speech typically directed toward infants, than in adult-directed speech. The current study examines whether infants continue to display similar differences in their segmentation of infant- and adult-directed speech later in development. We show that 16-month-old infants successfully segment words from a natural language speech stream presented in the adult-directed register and recognize these words later when presented in isolation. Furthermore, there were no differences in infants' ability to segment words from infant- and adult-directed speech at this age, although infants' success at segmenting words from adult-directed speech correlated with their vocabulary size.
... First, when it is easier to predict upcoming elements, processing resources can be used more efficiently. In addition, high frequency words can be learned early on, and used to facilitate learning of lower frequency elements, as can be seen in infants' use of their own name to segment adjacent words (Bortfeld et al., 2005), or in the use of familiar words to segment novel phonologically similar words (Altvater-Mackensen & Mani, 2013). Finally, lower frequency words may also benefit from appearing next to high frequency ones: the contrast between high and low frequency could make the lower frequency words easier to identify or learn. ...
Article
Full-text available
Across languages, word frequency and rank follow a power law relation, forming a distribution known as the Zipfian distribution. There is growing experimental evidence that this well-studied phenomenon may be beneficial for language learning. However, most investigations of word distributions in natural language have focused on adult-to-adult speech: Zipf’s law has not been thoroughly evaluated in child-directed speech (CDS) across languages. If Zipfian distributions facilitate learning, they should also be found in CDS. At the same time, several unique properties of CDS may result in a less skewed distribution. Here, we examine the frequency distribution of words in CDS in three studies. We first show that CDS is Zipfian across 15 languages from seven language families. We then show that CDS is Zipfian from early on (six-months) and across development for five languages with sufficient longitudinal data. Finally, we show that the distribution holds across different parts of speech: Nouns, verbs, adjectives and prepositions follow a Zipfian distribution. Together, the results show that the input children hear is skewed in a particular way from early on, providing necessary (but not sufficient) support for the postulated learning advantage of such skew. They highlight the need to study skewed learning environments experimentally.
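A minimal way to eyeball the Zipfian claim for any token corpus is to regress log frequency on log rank; a slope near -1 is the classic signature. The sketch below does this with a plain least-squares fit on an invented handful of child-directed tokens, as a sanity check only, not the estimation procedure used in the article.

```python
import math
from collections import Counter

def zipf_slope(tokens):
    """Estimate the slope of log(frequency) against log(rank).

    A slope near -1 is the classic Zipfian signature; this is a rough
    least-squares fit, not a rigorous power-law test.
    """
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Toy corpus of child-directed word tokens (invented)
tokens = "you want the ball you see the ball you want the doggy the doggy".split()
print(round(zipf_slope(tokens), 2))   # negative slope: frequency falls with rank
```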
... • We define lexical curiosity as a child being more curious about words that are similar to previously learned words.
• Learning of similar-sounding and/or similar-meaning words: similarity in either word form or word meaning boosts word learning [1,2,5,7]; children find it difficult to learn words that overlap on multiple dimensions [3]; earlier vocabularies are more systematic than later ones (age span 2-13+ years) [6].
• Could lexical curiosity lead to preferential learning of systematic words? ...
Poster
Full-text available
The arbitrariness of the sign has been widely discussed since Saussure referred to it as one of language systems’ key properties in 1916. Yet, we still do not know the impact of arbitrariness on word acquisition. Similarity in either word form or word meaning boosts word learning, but children find it difficult to learn words that overlap on multiple dimensions. We know that earlier vocabularies are more systematic than later ones (age span 2-13+ years). In this poster, we present some insights into the systematicity of earlier learned words (< 3 years).
... Highly familiar words might function as lexical anchors in speech processing (Frost, Dunn, Christiansen, Gómez, & Monaghan, 2020; Frost, Monaghan, & Christiansen, 2019), binding infants' attention to the stimulus and motivating them to explore the signal even further. For example, 9-month-olds benefit in their word segmentation from words that overlap in rimes with highly familiar words (Altvater-Mackensen & Mani, 2013). The same rationale applies to sound sequences below the word unit: at 11 months, highly frequent phoneme patterns (that cross word boundaries) are differentiated from low-frequency patterns, indicating storage of words as well as non-words containing these highly frequent sound patterns in infants' proto-lexicons (Ngon et al., 2013). ...
... Indeed, studies suggest that children detect phonological (Mani & Plunkett, 2010) and semantic similarities between words (Arias-Trejo & Plunkett, 2009; Mani, Durrant & Floccia, 2012; Altvater-Mackensen & Mani, 2013), as well as similarities based on visuoperceptual properties of word referents (Arias-Trejo & Plunkett, 2010; Johnson, McQueen & Huettig, 2011; Mani, Johnson, McQueen & Huettig, 2013; Bobb, Huettig & Mani, 2016) from early on. Thus, words appear to be organised according to their phonological, semantic and visuo-perceptual properties in the mental lexicon (see Mani & Borovsky, 2017). ...
Article
Studies on lexical development in young children often suggest that the organization of the early lexicon may vary with age and increasing vocabulary size. In the current study, we explicitly examined this suggestion in further detail using a longitudinal study of the development of phonological and semantic priming effects in the same group of toddlers at three different ages. In particular, our longitudinal design allows us to disentangle effects of increasing age and vocabulary size on priming and the extent to which vocabulary size may predict later priming effects. We tested phonological and semantic priming effects in monolingual German infants at 18, 21, and 24 months of age. We used the intermodal preferential looking paradigm combined with eye tracking to measure the influence of phonologically and semantic related/unrelated primes on target recognition. We found that phonological priming effects were predicted by participants’ current vocabulary size even after controlling for participants’ age and participants’ early vocabulary size. Semantic priming effects were, in contrast, not predicted by vocabulary size. Finally, we also found a relationship between early phonological priming effects and later semantic priming effects as well as between early semantic priming effects and later phonological priming effects, potentially suggesting (limited) consistency in lexical structure across development. Taken together, these results highlight the important role of vocabulary size in the development of priming effects in early childhood.
... We used a single-screen procedure, similar to previous adaptations of the Headturn Preference Procedure for word segmentation studies (Altvater-Mackensen & Mani, 2013; Thiessen & Erickson, 2013). Infants were seated on their parent's lap approximately 150 cm away from the screen. ...
Article
Infants are sensitive to syllable co-occurrence probabilities when segmenting words from fluent speech. However, segmenting two languages overlapping at the syllabic level is challenging because the statistical cues across the languages are incongruent. Successful segmentation thus relies on infants' ability to separate language inputs and track the statistics of each language. Here, we report three experiments investigating how infants statistically segment words from two overlapping languages in a simulated language-mixing bilingual environment. In the first two experiments, we investigated whether 9.5-month-olds can use French and English phonetic markers to segment words from two overlapping artificial languages produced by one individual. After showing that infants could segment the languages when the languages were presented in isolation (Experiment 1), we presented infants with two interleaved languages differing in phonetic cues (Experiment 2). Both monolingual and bilingual infants successfully segmented words from one of the two languages - the language heard last during familiarization. In Experiment 3, a conceptual replication, we replicated the findings of Experiment 2 with a different population and with different cues. As before, when 12-month-old monolingual infants heard two interleaved languages differing in English and Finnish phonetic cues, they learned only the last language heard during familiarization. Together, our findings suggest that segmenting words in a language-mixing environment is challenging, but infants possess a nascent ability to recruit phonetic cues to segment words from one of two overlapping languages in a bilingual-like environment.
... This extends previous studies, which find that monolingual toddlers tolerate 1-feature mispronunciations within the native-language phonology (White & Morgan, 2008), to recognition of words from an unknown, foreign language that are phonologically identical to or differ by 1 feature from targets in the native language. This suggests that new English words which overlap phonologically with their German labels may be a helpful aid for monolingual German toddlers as they begin to learn a second language, just as phonologically similar words boost word segmentation and learning within their own first language (Altvater-Mackensen & Mani, 2013; Newman et al., 2008). Both bilingual and monolingual toddlers exhibited recognition of identical words. ...
Article
We examined how L2 exposure early in life modulates toddler word recognition by comparing German–English bilingual and German monolingual toddlers’ recognition of words that overlapped to differing degrees, measured by number of phonological features changed, between English and German (e.g., identical, 1-feature change, 2-feature change, 3-feature change, no overlap). Recognition in English was modulated by language background (bilinguals vs. monolinguals) and by the amount of phonological overlap that English words shared with their L1 German translations. L1 word recognition remained unchanged across conditions between monolingual and bilingual toddlers, showing no effect of learning an L2 on L1 word recognition in bilingual toddlers. Furthermore, bilingual toddlers who had a later age of L2 acquisition had better recognition of words in English than those toddlers who acquired English at an earlier age. The results suggest an important role for L1 phonological experience on L2 word recognition in early bilingual word recognition.
... Familiarity does not just help children learn co-occurring words; it also helps infants extract similar-sounding words from speech. For instance, in one study (16), 7-month-olds who were familiarized with the word Löffel (spoon) could segment the similar-sounding word Lökkel (a nonword) from speech more easily than they could segment words that did not sound similar to Lökkel. This suggests that every word a child knows could help learn new words, leading to a potentially cascading explosion of words once the child has learned a few initial words. ...
Article
Most children can produce a few words by the end of their first year and rapidly acquire almost 30 times as many words in the following year. Although this general pattern remains the same for children learning different languages, the words individual children know are considerably different. In this article, we consider the possibility that children are an important source of variability in early vocabulary acquisition in the context of curiosity‐driven approaches to language learning. In particular, we review research that supports two interrelated claims: that what children know and what children are interested in interact in shaping what children learn. We suggest that this, as well as the possibility that children are motivated intrinsically to learn language, sets the stage for early vocabulary learning.
... Several paradigms have been devised with the goal of testing the lexical knowledge of infant populations of different ages. One way to address whether infants represent detail is to take infants' ability to detect mispronunciations of words as a measure of the specificity and detail of early words (for a review, see Altvater-Mackensen & Mani, 2013). The name-based categorization paradigm (Nazzi, Floccia, Moquet, & Butler, 2009; Nazzi & New, 2007) and the intermodal preferential looking paradigm (Golinkoff, Hirsh-Pasek, Cauley, & Gordon, 1987; Golinkoff, Ma, Song, & Hirsh-Pasek, 2013) were adapted to investigate infants' lexical knowledge, along with paradigms originally used to measure perceptual skills such as the head-turn preference (Hallé & de Boysson-Bardies, 1996; Jusczyk & Aslin, 1995) and switch paradigms (Stager & Werker, 1997). ...
Thesis
Full-text available
Infants' lexical processing is modulated by featural manipulations made to words, suggesting that early lexical representations are sufficiently specified to establish a match with the corresponding label. However, the precise degree of detail in early words requires further investigation due to equivocal findings. We studied this question by assessing children’s sensitivity to the degree of featural manipulation (Chapters 2 and 3), and sensitivity to the featural makeup of homorganic and heterorganic consonant clusters (Chapter 4). Gradient sensitivity on the one hand and sensitivity to homorganicity on the other hand would suggest that lexical processing makes use of sub-phonemic information, which in turn would indicate that early words contain sub-phonemic detail. The studies presented in this thesis assess children’s sensitivity to sub-phonemic detail using minimally demanding online paradigms suitable for infants: single-picture pupillometry and intermodal preferential looking. Such paradigms have the potential to uncover lexical knowledge that may be masked otherwise due to cognitive limitations. The study reported in Chapter 2 obtained a differential response in pupil dilation to the degree of featural manipulation, a result consistent with gradient sensitivity. The study reported in Chapter 3 obtained a differential response in proportion of looking time and pupil dilation to the degree of featural manipulation, a result again consistent with gradient sensitivity. The study reported in Chapter 4 obtained a differential response to the manipulation of homorganic and heterorganic consonant clusters, a result consistent with sensitivity to homorganicity. These results suggest that infants' lexical representations are not only specific, but also detailed to the extent that they contain sub-phonemic information.
... Clearly there are many functional pressures at play for the listener, the speaker, and the learner, and they do not individually point towards either clumpiness or distinctiveness of wordforms. In the context of word learning, wordform similarity may be both advantageous and disadvantageous: Similar-sounding words (1) minimize the amount of information that needs to be stored (e.g., Storkel & Maekawa, 2005); (2) help with word segmentation (Altvater-Mackensen & Mani, 2013); (3) are easier to recognize because they are composed of highly probable sequences of sounds (e.g., Jusczyk & Luce, 1994); and (4) help children group words into categories (i.e., nouns, verbs) when phonological proximity is aligned with semantic or syntactic classes (Monaghan et al., 2011). Yet when it comes to individual word learning, learners have a hard time learning a novel meaning for a sound string similar to a word they know (e.g., 'tog', a phonological neighbor of the familiar word 'dog'; e.g., Swingley & Aslin, 2007), and this disadvantage is even greater when phonological similarity is aligned with syntactic or semantic similarity. ...
Article
Recent evidence suggests that cognitive pressures associated with language acquisition and use could affect the organization of the lexicon. On one hand, consistent with noisy channel models of language (e.g., Levy, 2008), the phonological distance between wordforms should be maximized to avoid perceptual confusability (a pressure for dispersion). On the other hand, a lexicon with high phonological regularity would be simpler to learn, remember and produce (e.g., Monaghan et al., 2011) (a pressure for clumpiness). Here we investigate wordform similarity in the lexicon, using measures of word distance (e.g., phonological neighborhood density) to ask whether there is evidence for dispersion or clumpiness of wordforms in the lexicon. We develop a novel method to compare lexicons to phonotactically-controlled baselines that provide a null hypothesis for how clumpy or sparse wordforms would be as the result of only phonotactics. Results for four languages, Dutch, English, German and French, show that the space of monomorphemic wordforms is clumpier than what would be expected by the best chance model according to a wide variety of measures: minimal pairs, average Levenshtein distance and several network properties. This suggests a fundamental drive for regularity in the lexicon that conflicts with the pressure for words to be as phonologically distinct as possible.
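The idea of comparing a real lexicon against chance baselines can be illustrated with simple wordform-distance measures. The sketch below counts minimal pairs and the mean pairwise Levenshtein distance for a small toy word list, and for a crude null lexicon built by permuting segments within word positions; this position-shuffling baseline is only a stand-in for the phonotactically-controlled baselines developed in the article, and the word list is invented.

```python
import random
from itertools import combinations

def levenshtein(a, b):
    """Standard edit distance between two segment strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def clumpiness(words):
    """Return (number of minimal pairs, mean pairwise edit distance)."""
    dists = [levenshtein(a, b) for a, b in combinations(words, 2)]
    return sum(d == 1 for d in dists), sum(dists) / len(dists)

def shuffled_baseline(words, seed=0):
    """Crude null lexicon: permute segments within each word position."""
    rng = random.Random(seed)
    fake = []
    groups = {}
    for w in words:                      # group words by length
        groups.setdefault(len(w), []).append(w)
    for length, group in groups.items():
        columns = [[w[i] for w in group] for i in range(length)]
        for col in columns:
            rng.shuffle(col)
        fake.extend("".join(col[k] for col in columns) for k in range(len(group)))
    return fake

lexicon = ["cat", "bat", "cot", "dog", "dig", "pen", "pin", "map"]
print("real lexicon:", clumpiness(lexicon))
print("baseline:    ", clumpiness(shuffled_baseline(lexicon)))
```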
... Computational simulations subsequently supported this hypothesis by demonstrating that lexical acquisition could be facilitated in cases where semantic category knowledge is well organized (Borovsky & Elman, 2006). Other work has indicated that infants tend to learn words in a "clustered" fashion, that is, by preferentially acquiring words that are similar in meaning and sound to other known items, rather than learning unrelated words (Altvater-Mackensen & Mani, 2013; Hills, Maouene, Maouene, Sheya, & Smith, 2009; Steyvers & Tenenbaum, 2005). Importantly, coherence in the semantic microstructure of infants' early vocabularies is associated with vocabulary growth and word learning strategy usage (Beckage, Smith, & Hills, 2011; Yurovsky, Bion, Smith, & Fernald, 2012). ...
Article
Although the size of a child's vocabulary associates with language-processing skills, little is understood regarding how this relation emerges. This investigation asks whether and how the structure of vocabulary knowledge affects language processing in English-learning 24-month-old children (N = 32; 18 F, 14 M). Parental vocabulary report was used to calculate semantic density in several early-acquired semantic categories. Performance on two language-processing tasks (lexical recognition and sentence processing) was compared as a function of semantic density. In both tasks, real-time comprehension was facilitated for higher density items, whereas lower density items experienced more interference. The findings indicate that language-processing skills develop heterogeneously and are influenced by the semantic network surrounding a known word.
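The abstract does not spell out how semantic density was computed; one plausible reading is that a word's density reflects how many other reported words belong to the same checklist category. The sketch below implements that reading with invented CDI-style categories and an invented vocabulary report, purely to illustrate the construct.

```python
# Hypothetical CDI-style category assignments and a child's reported vocabulary
# (all words and categories are illustrative, not the study's actual items).
CATEGORIES = {
    "animals": ["dog", "cat", "duck", "horse", "fish"],
    "vehicles": ["car", "truck", "bus", "train"],
    "food": ["banana", "apple", "milk", "cookie"],
}

def semantic_density(known_words):
    """For each known word, count how many *other* known words share its category.

    Higher counts mean the word sits in a denser region of the child's
    reported vocabulary; one simple way the construct might be quantified.
    """
    known = set(known_words)
    density = {}
    for category, members in CATEGORIES.items():
        known_members = [w for w in members if w in known]
        for w in known_members:
            density[w] = len(known_members) - 1
    return density

print(semantic_density(["dog", "cat", "duck", "car", "banana"]))
# -> {'dog': 2, 'cat': 2, 'duck': 2, 'car': 0, 'banana': 0}
```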
... This is even true for third-party or observational contexts in which infants witness a demonstration by another person that is not directed at them (e.g., Akhtar, 2005; Nielsen, Moore, & Mohamedally, 2012). Given that 2-year-olds' word learning system is quite flexible (e.g., Altvater-Mackensen & Mani, 2013a, 2013b), it would be particularly interesting to see whether 2-year-olds are equally proficient in employing implicit posture cues. ...
Article
A considerable amount of research has examined children's ability to rely on explicit social cues such as pointing to understand others' referential intentions. Yet, skillful social interaction also requires reliance on and learning from implicit cues (i.e. cues that are not displayed with the explicit intention to teach or inform someone). From an embodied point of view, orienting movements and body orientation are salient cues that reveal something about a person's intentional relations without being explicit communicative cues. In three experiments, the present study investigated the development of the ability to use body information in a word learning situation. To this end, we presented 2-year-old children, 3.5-year-old children, and adults with movies on an eye-tracking screen in which an actor oriented her upper body to one of two objects while uttering a novel word. The results show that the 3.5-year-old children and adults, but not the 2-year-old children, related the novel word to the referred object (Experiments 1 and 2). Yet, when the actor oriented her body to one object while pointing to the other object, children of both age groups relied on the pointing cue (Experiment 3). This suggests that by 3.5 years children use another's body orientation as an indicator of her intentional relations, but that they prioritize explicit social cues over implicit body posture cues. Overall, the study supports theoretical views that an appreciation of others' intentional relations does not emerge as an all-or-nothing ability, but rather gradually in the course of early development.
... For example, Schmale and Seidl (2009) showed that when the passages were produced in an unfamiliar foreign accent, American infants could segment words across accents (and across speakers) only at 13 months and not at 9 months, and only at 12 months with non-local regionally accented speech (Schmale, Cristia, Seidl, & Johnson, 2010). Altvater-Mackensen and Mani (2013) reported successful segmentation in 7-month-old German infants when the to-be-segmented words were phonologically close to familiar words (the design of their experiments did not allow determination of whether the infants were able to segment previously unfamiliar words as in Jusczyk & Aslin, 1995). ...
Article
Visual speech cues from a speaker's talking face aid speech segmentation in adults, but despite the importance of speech segmentation in language acquisition, little is known about the possible influence of visual speech on infants' speech segmentation. Here, to investigate whether there is facilitation of speech segmentation by visual information, two groups of English-learning 7-month-old infants were presented with continuous speech passages, one group with auditory-only (AO) speech and the other with auditory-visual (AV) speech. Additionally, the possible relation between infants' relative attention to the speaker's mouth versus eye regions and their segmentation performance was examined. Both the AO and the AV groups of infants successfully segmented words from the continuous speech stream, but segmentation performance persisted for longer for infants in the AV group. Interestingly, while AV group infants showed no significant relation between the relative amount of time spent fixating the speaker's mouth versus eyes and word segmentation, their attention to the mouth was greater than that of AO group infants, especially early in test trials. The results are discussed in relation to the possible pathways through which visual speech cues aid speech perception.
Thesis
Full-text available
The central topic in the current dissertation is how a language user's mind deals with phonological contrasts, a prerequisite to speech recognition. The conducted experiments were set up to investigate how the language-universal contrast between coronal (i.e. front) and dorsal (i.e. back) vowels, as well as the language-specific contrast between labial (i.e. round) and nonlabial (i.e. non-round) coronal vowels, are represented in a language where both are lexically contrastive. Language users' perception of speech was used to study the underlying phonological representation. I collected perception data on Dutch vowel contrasts, using three different methods: (1) Mismatch Negativity (MMN), measuring electrical brain responses through electroencephalography (EEG), used in adults; (2) semantic priming, measuring reaction times, used in adults; and (3) a word-learning variant of the intermodal preferential looking paradigm (IPLP), measuring looking behaviour in toddlers. In all experiments, perceptual asymmetries were assessed to investigate (use of) phonological representations relevant for a three-way contrast.
Article
Full-text available
Very young babies show highly refined language skills, being able to perceive many features of adult speech. The perception of the mother tongue is essential to language acquisition. This literature review deals with the speech perception skills of children under one year of age. A literature search was therefore performed in 7 databases, in English, French, Portuguese and Spanish, covering the period 2003-2014. This bibliographic survey shows how quickly language acquisition proceeds and that very young infants are able to use elaborate strategies to initiate such acquisition.
Thesis
Prematurity is currently an important public health problem that affects 1 in 10 babies worldwide every year. In France, preterm birth has steadily increased from 5.9% in 1995 to 7.3% in 2014. Research has demonstrated that prematurely born children are more likely to encounter difficulties in language development and other cognitive domains than children born fullterm. To date, knowledge on early language abilities in preterm infants remains limited. The first goal of this doctoral research was to specify different speech perception abilities in the first two years of life in preterm infants, comparing their abilities to those of fullterm infants of the same postnatal age. The second goal was to investigate whether degree of prematurity modulates linguistic performance across preterm infants. This thesis is organized in three experimental parts. First, we explored word segmentation (the ability to extract word forms) from fluent speech, an ability that is related to lexical acquisition. Our findings showed that basic segmentation abilities are in place in monolingual preterm infants at 6 months of postnatal age (Exp. 1), since they segment monosyllabic words just like their postnatal (Nishibayashi, Goyet, & Nazzi, 2015) and corrected age (4-month-olds; Exp. 2) fullterm peers. However, we also found differences with fullterms. While 6-month-old preterms segment embedded syllables as fullterms do (Nishibayashi et al., 2015), the direction of the effect is reversed, suggesting differential processing mechanisms (Exp. 3). Moreover, at 8 months postnatal age, we failed to find evidence for a consonant bias in recognition of segmented word forms (Exp. 4) as found for fullterms of the same age (Nishibayashi & Nazzi, 2016). Nevertheless, French-dominant bilingual populations were found to segment monosyllabic words in French at 6 months, whether born preterm or fullterm (Exp. 5). In the second part, using eye-tracking techniques, we measured preterm and fullterm infants' scanning patterns of a talking face in the native (French) and a non-native (English) language. We found that preterm infants at 8 months postnatal age show different looking behavior than their fullterm counterparts matched on postnatal and maturational age. Compared to fullterm infants, who showed different scanning patterns for a face speaking in the two languages, preterm infants showed similar scanning patterns for both languages (Exp. 6). These differential gaze patterns provide a first step to characterize the developmental course of audiovisual speech perception in preterm infants. The third part focused on lexical development. Our results show that preterm infants recognize familiar word forms at 11 months postnatal age (Exp. 7), hence at the same postnatal age as fullterm infants (Hallé & de Boysson-Bardies, 1994). With respect to word production at around 24 months of postnatal age (Exp. 8), we found that preterm infants have smaller vocabularies than fullterms of the same postnatal age, but as a group have similar levels to their fullterm, corrected-age peers. However, more preterm infants were below the 10th percentile than expected based on (fullterm) norms, which might constitute an index for early identification of (preterm) infants at risk for linguistic delays. Taken together, our results help us build a more detailed and nuanced picture of early language acquisition in preterm infants, and better understand the relative contribution of environmental input (i.e. exposure to unfiltered auditory and visual input after preterm birth) and brain maturation to this developmental trajectory.
Article
Full-text available
High frequency words play a key role in language acquisition, with recent work suggesting they may serve both speech segmentation and lexical categorisation. However, it is not yet known whether infants can detect novel high frequency words in continuous speech, nor whether they can use them to support segmentation and categorisation at the same time. For instance, when hearing “you eat the biscuit”, can children use the high-frequency words “you” and “the” to segment out “eat” and “biscuit”, and determine their respective lexical categories? We tested this in two experiments. In Experiment 1, we familiarised 12-month-old infants with continuous artificial speech comprising repetitions of target words, which were preceded by high-frequency marker words that distinguished the targets into two distributional categories. In Experiment 2, we repeated the task using the same language but with additional phonological cues to word and category structure. In both studies, we measured learning with head-turn preference tests of segmentation and categorisation, and compared performance against a control group that heard the artificial speech without the marker words (i.e., just the targets). There was no evidence that high frequency words helped either speech segmentation or grammatical categorisation. However, segmentation was seen to improve when the distributional information was supplemented with phonological cues (Experiment 2). In both experiments, exploratory analysis indicated that infants’ looking behaviour was related to their linguistic maturity (indexed by infants’ vocabulary scores), with infants with high versus low vocabulary scores displaying novelty and familiarity preferences, respectively. We propose that high-frequency words must reach a critical threshold of familiarity before they can be of significant benefit to learning.
Article
Children tend to produce words earlier when they are connected to a variety of other words along the phonological and semantic dimensions. Though these semantic and phonological connectivity effects have been extensively documented, little is known about their underlying developmental mechanism. One possibility is that learning is driven by lexical network growth where highly connected words in the child's early lexicon enable learning of similar words. Another possibility is that learning is driven by highly connected words in the external learning environment, instead of highly connected words in the early internal lexicon. The present study tests both scenarios systematically in both the phonological and semantic domains across 10 languages. We show that phonological and semantic connectivity in the learning environment drives growth in both production‐ and comprehension‐based vocabularies, even controlling for word frequency and length. This pattern of findings suggests a word learning process where children harness their statistical learning abilities to detect and learn highly connected words in the learning environment.
Article
This chapter reviews the role of listening in language learning and traces the development of speech perception from the prenatal phase to early school age. Its focus is on experimental findings that illustrate developmental changes in phonological processing and learning. Following the developmental trajectory from the earliest listening experiences to literacy, the chapter addresses how (1) prenatal experience shapes neonatal listening, (2) perception attunes to the native language in infancy, (3) lexical knowledge emerges and shapes listening in toddlerhood, (4) pre‐literate listening capacities form a basis for the development of reading and writing, and (5) the social context modulates the early stages of language learning.
Article
Full-text available
The overall pattern of vocabulary development is relatively similar across children learning different languages. However, there are considerable differences in the words known to individual children. Historically, this variability has been explained in terms of differences in the input. Here, we examine the alternate possibility that children's individual interest in specific natural categories shapes the words they are likely to learn – a child who is more interested in animals will learn a new animal name more easily than a new vehicle name. Two‐year‐old German‐learning children (N = 39) were exposed to four novel word‐object associations for objects from four different categories. Prior to the word learning task, we measured their interest in the categories that the objects belonged to. Our measure was pupillary change following exposure to familiar objects from these four categories, with increased pupillary change interpreted as increased interest in that category. Children showed more robust learning of word‐object associations from categories they were more interested in relative to categories they were less interested in. We further found that interest in the novel objects themselves influenced learning, with distinct influences of both category interest and object interest on learning. These results suggest that children's interest in different natural categories shapes their word learning. This provides evidence for the strikingly intuitive possibility that a child who is more interested in animals will learn novel animal names more easily than a child who is more interested in vehicles.
Article
Word segmentation plays a crucial role in language acquisition, particularly for word learning and syntax development, and possibly predicts later language abilities. Previous studies have suggested that this ability develops differently across languages, possibly affected by the languages' rhythmic properties (Rhythmic Segmentation Hypothesis) and target word location in the prosodic structure (Edge Hypothesis). The present study investigates early word segmentation in a language, European Portuguese, that exhibits both stress- and syllable-timed properties, as well as strong cues to both higher-level prosodic boundaries and the word level. Infants aged 4-10 months were tested with target words located in utterance-medial and utterance-final positions. Evidence for word segmentation was found early in development but only for utterance-edge located target words, suggesting the more salient prosodic cues play a crucial role. There was some evidence for segmentation in utterance-medial position by 10 months, demonstrating that this ability is not yet fully developed, possibly due to mixed rhythmic properties.
Article
The purpose of the current study was to examine effects of bilingual language input on infant word segmentation and on talker generalization. In the present study, monolingually and bilingually exposed infants were compared on their abilities to recognize familiarized words in speech and to maintain generalizable representations of familiarized words. Words were first presented in the context of sentences to infants and then presented to infants in isolation during a test phase. During test, words were produced by a talker of the same gender and by a talker of the opposite gender. Results demonstrated that both bilingual and monolingual infants were able to recognize familiarized words to a comparable degree. Moreover, both bilingual and monolingual infants recognized words in spite of talker variation. Results demonstrated robust word recognition and talker generalization in monolingual and bilingual infants at 8 months of age.
Article
Full-text available
Very young babies show very refined language skills, being able to perceive many features in adult speech. The perception of the mother tongue is therefore essential to language acquisition. This literature review deals with the speech perception skills of children less than one year of age. To that end, a literature search was performed in 7 databases, in English, French, Portuguese and Spanish, covering the period 2003-2014. From this bibliographic research it was possible to recognize how quickly language acquisition occurs, and that very young infants are able to use elaborate strategies to initiate such acquisition.
Article
Infants start learning words, the building blocks of language, at least by 6 months. To do so, they must be able to extract the phonological form of words from running speech. A rich literature has investigated this process, termed word segmentation. We addressed the fundamental question of how infants of different ages segment words from their native language using a meta-analytic approach. Based on previous popular theoretical and experimental work, we expected infants to display familiarity preferences early on, with a switch to novelty preferences as infants become more proficient at processing and segmenting native speech. We also considered the possibility that this switch may occur at different points in time as a function of infants' native language and took into account the impact of various task- and stimulus-related factors that might affect difficulty. The combined results from 168 experiments reporting on data gathered from 3774 infants revealed a persistent familiarity preference across all ages. There was no significant effect of additional factors, including native language and experiment design. Further analyses revealed no sign of selective data collection or reporting. We conclude that models of infant information processing that are frequently cited in this domain may not, in fact, apply in the case of segmenting words from native speech.
Article
Full-text available
How might infants' existing vocabulary affect their ability to learn new words? Specifically, how does the density and token frequency of lexical neighbors in the speech surrounding a child affect that child's ability to learn new word-to-world mappings? The current paper presents a series of studies that demonstrate strong effects of lexical neighborhoods on 17-month-old infants' abilities to learn new words. These effects were created with only a small amount of exposure and little or no opportunity for semantic factors to overlap. Thus, it appears that simply hearing a word can make it easier or harder to learn depending on the number and frequency of items surrounding that word in the lexicon. Lexical neighbors are words that sound similar to a target item; empirically, they are often defined as words that differ from it by a single phoneme.
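The single-phoneme neighbor criterion described above can be made concrete with a short sketch (Python; the toy lexicon and the one-character-per-phoneme transcriptions are illustrative assumptions, not the study's materials):

# Toy illustration of the one-phoneme neighbor criterion described above.
# The lexicon and its "phonemic" forms are invented examples.

def is_neighbor(a, b):
    """True if phoneme strings a and b differ by exactly one substitution,
    addition, or deletion (the standard neighborhood criterion)."""
    if a == b:
        return False
    if len(a) == len(b):  # substitution: exactly one mismatching position
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:  # addition/deletion of a single phoneme
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter for i in range(len(longer)))
    return False

def neighborhood_density(target, lexicon):
    """Number of lexicon entries that are one-phoneme neighbors of the target."""
    return sum(is_neighbor(target, word) for word in lexicon)

lexicon = ["kat", "bat", "kap", "kst", "dOg", "kQt"]  # hypothetical forms, one character per phoneme
print(neighborhood_density("kat", lexicon))  # -> 4 in this toy lexicon

Counting neighbors this way, optionally weighting them by their token frequency, yields the density measure that the neighborhood effects above refer to.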
Article
Full-text available
The current study examines whether a word that is phonologically similar to more words the child already knows is easier to acquire than a word that is unlike other words. Children aged 20 and 24 months were taught two new words: "wat," which is similar to many words children already know, and "fowk," which is not. Learning of the novel words corresponded to neighborhood density in the individual child's vocabulary. We also examined the influence of prior semantic knowledge on the acquisition of novel words in a connectionist network. Together with the empirical data, this model provides novel insights into how similarity at the phonemic level influences acquisition of semantics.
Article
Full-text available
Much effort has gone into constructing models of how children segment speech and thereby discover the words of their language. Much effort has also gone into constructing models of how adults access their mental lexicons and thereby segment speech into words. In this paper, I explore the possibility of a model that could account for both word discovery by children and on-line segmentation by adults. In particular, I discuss extensions to the distributional regularity (DR) model of Brent and Cartwright (1996) that could yield an account of on-line segmentation as well as word discovery.
Article
Full-text available
It is widely accepted that infants begin learning their native language not by learning words, but by discovering features of the speech signal: consonants, vowels, and combinations of these sounds. Learning to understand words, as opposed to just perceiving their sounds, is said to come later, between 9 and 15 mo of age, when infants develop a capacity for interpreting others' goals and intentions. Here, we demonstrate that this consensus about the developmental sequence of human language learning is flawed: in fact, infants already know the meanings of several common words from the age of 6 mo onward. We presented 6- to 9-mo-old infants with sets of pictures to view while their parent named a picture in each set. Over this entire age range, infants directed their gaze to the named pictures, indicating their understanding of spoken words. Because the words were not trained in the laboratory, the results show that even young infants learn ordinary words through daily experience with language. This surprising accomplishment indicates that, contrary to prevailing beliefs, either infants can already grasp the referential intentions of adults at 6 mo or infants can learn words before this ability emerges. The precocious discovery of word meanings suggests a perspective in which learning vocabulary and learning the sound structure of spoken language go hand in hand as language acquisition begins.
Article
Full-text available
To examine the possibility that early signal-to-word form mapping capabilities are robust enough to handle substantial indexical variation in the realization of words. Two groups of 7.5-month-olds were tested with the Headturn Preference Procedure. Half of the infants were exposed to words embedded in passages spoken by their mothers and tested on lists of trained and novel isolated words spoken by their fathers. The other half of the infants were yoked pairs listening to unfamiliar speakers. In the test phase, infants listened longer to trained than to novel words, indicating that they successfully segmented the words from the passages. This result was not modulated by infants' familiarity with the speaker. Under more naturalistic listening conditions, 7.5-month-olds exhibit the ability to recognize words in the face of substantial indexical variation regardless of whether speakers are familiar. This suggests that early word representations are, at least to some extent, independent of the speaker's gender and may reflect sophisticated abstraction capabilities on the part of the infants, which would render extreme episodic models of early speech perception untenable. Additional research using similarly ecologically valid testing methods is called for to elucidate the precise nature of early word representations.
Article
Full-text available
Previous studies have shown that 7.5-month-olds can track and encode words in fluent speech, but they fail to equate instances of a word that contrast in talker gender, vocal affect, and fundamental frequency. By 10.5 months, they succeed at generalizing across such variability, marking a clear transition period during which infants' word recognition skills become qualitatively more mature. Here we explore the role of word familiarity in this critical transition and, in particular, whether words that occur frequently in a child's listening environment (i.e., "Mommy" and "Daddy") are more easily recognized when they differ in surface characteristics than those that infants have not previously encountered (termed nonwords). Results demonstrate that words are segmented from continuous speech in a more linguistically mature fashion than nonwords at 7.5 months, but at 10.5 months, both words and nonwords are segmented in a relatively mature fashion. These findings suggest that early word recognition is facilitated in cases where infants have had significant exposure to items, but at later stages, infants are able to segment items regardless of their presumed familiarity.
Article
Full-text available
Do infants implicitly name visually fixated objects whose names are known, and does this information influence their preference for looking at other objects? We presented 18-month-old infants with a picture-based phonological priming task and examined their recognition of named targets in primed (e.g., dog-door) and unrelated (e.g., dog-boat) trials. Infants showed better recognition of the target object in primed than in unrelated trials across three measures. As the prime image was never explicitly named during the experiment, the only explanation for the systematic influence of the prime image on target recognition is that infants, like adults, can implicitly name visually fixated images and that these implicitly generated names can prime infants' subsequent responses in a paired visual-object spoken-word-recognition task.
Article
Full-text available
A series of four experiments examined infants' capacities to detect repeated words in fluent speech. In Experiment 1, 7 1/2-month-old American infants were familiarized with two different monosyllabic words and subsequently were presented with passages which either included or did not include the familiar target words embedded in sentences. The infants listened significantly longer to the passages containing the familiar target words than to passages containing unfamiliar words. A comparable experiment with 6-month-olds provided no indication that infants at this age detected the target words in the passages. In Experiment 3, a group of 7 1/2-month-olds was familiarized with two different non-word targets which differed in their initial phonetic segment by only one or two phonetic features from words presented in two of the passages. These infants showed no tendency to listen significantly longer to the passages with the similar sounding words, suggesting that the infants may be matching rather detailed information about the items in the familiarization period to words in the test passages. Finally, Experiment 4 demonstrated that even when the 7 1/2-month-olds were initially familiarized with target words in sentential contexts rather than in isolation, they still showed reliable evidence of recognizing these words during the test phase. Taken together, the results of these studies suggest that some ability to detect words in fluent speech contexts is present by 7 1/2 months of age.
Article
Full-text available
In 4 experiments, adults were familiarized with utterances from an artificial language. Short utterances occurred both in isolation and as part of a longer utterance, either at the edge or in the middle of the longer utterance. After familiarization, participants' recognition memory for fragments of the long utterance was tested. Recognition was greatest for the remainder of the longer utterance after extraction of the short utterance, but only when the short utterance was located at the edge of the long utterance. These results support the incremental distributional regularity optimization (INCDROP) model of speech segmentation and word discovery, which asserts that people segment utterances into familiar and new wordlike units in such a way as to minimize the burden of processing new units. INCDROP suggests that segmentation and word discovery during native-language acquisition may be driven by recognition of familiar units from the start, with no need for transient bootstrapping mechanisms.
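As a rough illustration of segmenting an utterance into familiar and new word-like units, the sketch below strips already-known word forms from the edges of a syllable string and treats the residue as a candidate new word. This is a deliberately simplified toy for exposition, not an implementation of the INCDROP model; the known-word list and the utterance are invented examples.

# Toy "lexical subtraction": peel familiar word forms off the edges of an
# unsegmented utterance and treat what remains as a candidate new word form.
# A simplification for exposition, not the INCDROP model itself.

def subtract_known(utterance_syllables, known_words):
    """Strip known word forms (tuples of syllables) from both edges;
    return (words found, residual syllables)."""
    sylls = list(utterance_syllables)
    found = []
    changed = True
    while changed:
        changed = False
        for word in known_words:
            n = len(word)
            if len(sylls) >= n and tuple(sylls[:n]) == word:      # known word at the left edge
                found.append(word)
                sylls = sylls[n:]
                changed = True
            elif len(sylls) >= n and tuple(sylls[-n:]) == word:   # known word at the right edge
                found.append(word)
                sylls = sylls[:-n]
                changed = True
    return found, sylls

known = [("the",), ("dog", "gy")]                      # hypothetical familiar forms
found, residue = subtract_known(["the", "dog", "gy", "bites"], known)
print(found)    # [('the',), ('dog', 'gy')]
print(residue)  # ['bites']  <- candidate new word form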
Article
Full-text available
A series of 15 experiments was conducted to explore English-learning infants' capacities to segment bisyllabic words from fluent speech. The studies in Part I focused on 7.5-month-olds' abilities to segment words with strong/weak stress patterns from fluent speech. The infants demonstrated an ability to detect strong/weak target words in sentential contexts. Moreover, the findings indicated that the infants were responding to the whole words and not to just their strong syllables. In Part II, a parallel series of studies was conducted examining 7.5-month-olds' abilities to segment words with weak/strong stress patterns. In contrast with the results for strong/weak words, 7.5-month-olds appeared to missegment weak/strong words. They demonstrated a tendency to treat strong syllables as markers of word onsets. In addition, when weak/strong words co-occurred with a particular following weak syllable (e.g., "guitar is"), 7.5-month-olds appeared to misperceive these as strong/weak words (e.g., "taris"). The studies in Part III examined the abilities of 10.5-month-olds to segment weak/strong words from fluent speech. These older infants were able to segment weak/strong words correctly from the various contexts in which they appeared. Overall, the findings suggest that English learners may rely heavily on stress cues when they begin to segment words from fluent speech. However, within a few months' time, infants learn to integrate multiple sources of information about the likely boundaries of words in fluent speech.
Article
Full-text available
Infants' representations of the sound patterns of words were explored by examining the effects of talker variability on the recognition of words in fluent speech. Infants were familiarized with isolated words (e.g., cup and dog) from 1 talker and then heard 4 passages produced by another talker, 2 of which included the familiarized words. At 7.5 months of age, infants attended longer to passages with the familiar words for materials produced by 2 female talkers or 2 male talkers but not for materials by a male and a female talker. These findings suggest a strong role for talker-voice similarity in infants' ability to generalize word tokens. By 10.5 months, infants could generalize different instances of the same word across talkers of the opposite sex. One implication of the present results is that infants' initial representations of the sound structure of words not only include phonetic information but also indexical properties relating to the vocal characteristics of particular talkers.
Article
Full-text available
This research explores the role of phonotactic probability in two-year-olds' production of coda consonants. Twenty-nine children were asked to repeat CVC non-words that were used as labels for pictures of imaginary animals. The CVC non-words were controlled for their phonotactic probabilities, neighbourhood densities, word-likelihood ratings, and contained the identical coda across low and high phonotactic probability pairs. This allowed for comparisons of children's productions of the same coda consonant in low and high phonotactic probability environments. Children were significantly more likely to produce the same coda in high phonotactic probability non-words than in low phonotactic probability non-words. These results are consistent with the hypothesis that phonotactic probability is a predictor of coda production in English. Moreover, this finding provides further evidence for the role of the input and distribution of sound patterns in the ambient language as a basis for phonological acquisition.
Article
Full-text available
How do infants find the words in the tangle of speech that confronts them? The present study shows that by as early as 6 months of age, infants can already exploit highly familiar words-including, but not limited to, their own names-to segment and recognize adjoining, previously unfamiliar words from fluent speech. The head-turn preference procedure was used to familiarize babies with short passages in which a novel word was preceded by a familiar or a novel name. At test, babies recognized the word that followed the familiar name, but not the word that followed the novel name. This is the youngest age at which infants have been shown capable of segmenting fluent speech. Young infants have a powerful aid available to them for cracking the speech code. Their emerging familiarity with particular words, such as their own and other people's names, can provide initial anchors in the speech stream.
Article
Full-text available
The purpose of this study was to differentiate effects of phonotactic probability, the likelihood of occurrence of a sound sequence, and neighborhood density, the number of words that sound similar to a given word, on adult word learning. A second purpose was to determine what aspect of word learning (viz., triggering learning, formation of an initial representation, or integration with existing representations) was influenced by each variable. Thirty-two adults were exposed to 16 nonwords paired with novel objects in a story context. The nonwords orthogonally varied in phonotactic probability and neighborhood density. Learning was measured following 1, 4, and 7 exposures in a picture-naming task. Partially correct (i.e., 2 of 3 phonemes correct) and completely correct responses (i.e., 3 of 3 phonemes correct) were analyzed together and independently to examine emerging and partial representations of new words versus complete and accurate representations of new words. Analysis of partially correct and completely correct responses combined showed that adults learned a lower proportion of high-probability nonwords than low-probability nonwords (i.e., high-probability disadvantage) and learned a higher proportion of high-density nonwords than low-density nonwords (i.e., high-density advantage). Separate analysis of partially correct responses yielded an effect of phonotactic probability only, whereas analysis of completely correct responses yielded an effect of neighborhood density only. These findings suggest that phonological and lexical processing influence different aspects of word learning. In particular, phonotactic probability may aid in triggering new learning, whereas neighborhood density may influence the integration of new lexical representations with existing representations.
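Phonotactic probability in this literature is typically operationalized as average positional segment and biphone probabilities estimated from a phonemic lexicon. The sketch below illustrates one such estimate; the mini corpus is hypothetical, and real studies use large, often frequency-weighted dictionaries rather than this simplification.

# Illustrative positional segment / biphone probability estimates.
# The "corpus" of phonemic forms is invented; real analyses use large
# dictionaries and usually weight by word frequency.

from collections import Counter

corpus = ["kat", "kab", "bat", "tap", "pat", "kap"]  # toy phonemic forms

pos_counts, pos_totals = Counter(), Counter()   # counts for segment p at position i
bi_counts, bi_totals = Counter(), Counter()     # counts for biphone starting at position i
for word in corpus:
    for i, p in enumerate(word):
        pos_counts[(i, p)] += 1
        pos_totals[i] += 1
    for i in range(len(word) - 1):
        bi_counts[(i, word[i:i + 2])] += 1
        bi_totals[i] += 1

def segment_prob(word):
    """Average positional segment probability of a candidate form."""
    return sum(pos_counts[(i, p)] / pos_totals[i] for i, p in enumerate(word)) / len(word)

def biphone_prob(word):
    """Average positional biphone probability of a candidate form."""
    probs = [bi_counts[(i, word[i:i + 2])] / bi_totals[i] for i in range(len(word) - 1)]
    return sum(probs) / len(probs)

print(segment_prob("kat"), biphone_prob("kat"))  # relatively high in this toy corpus
print(segment_prob("fok"), biphone_prob("fok"))  # low: its segments and biphones are unattested here

Neighborhood density, by contrast, is a count over whole words (see the neighbor sketch above), which is why the two measures can dissociate in the way this study reports.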
Article
Full-text available
The present experiments investigated how the process of statistically segmenting words from fluent speech is linked to the process of mapping meanings to words. Seventeen-month-old infants first participated in a statistical word segmentation task, which was immediately followed by an object-label-learning task. Infants presented with labels that were words in the fluent speech used in the segmentation task were able to learn the object labels. However, infants presented with labels consisting of novel syllable sequences (nonwords; Experiment 1) or familiar sequences with low internal probabilities (part-words; Experiment 2) did not learn the labels. Thus, prior segmentation opportunities, but not mere frequency of exposure, facilitated infants' learning of object labels. This work provides the first demonstration that exposure to word forms in a statistical word segmentation task facilitates subsequent word learning.
Article
Introduction: Saffran et al. [1] have established infants' ability to use statistical learning (SL) for word segmentation in an artificial language. In this paper, we present a computational model that is simulated on naturally occurring child-directed speech. We show that the SL mechanism as proposed in [1] does not work. However, if appropriate, and presumably innate, constraints on phonological structures are built in, SL can be used to achieve very accurate segmentation results. Statistical Learning via Local Minima: The SL algorithm in [1] makes use of transitional probabilities (TP) between adjacent syllables. For example, given a sufficient amount of exposure to English, the learner may establish that in the sequence "prettybaby", both the TP of "pre->ty" and the TP of "ba->by" are higher than the TP of "ty->ba". Thus, the "ty->ba" transition makes a local minimum, a point of lower TP than its neighbors. Even with minimal exposure, 8-month-old infants can use local minima to segment pseudo-words. Replications have been carried out in other cognitive domains such as music and vision, and the ability of SL appears to exist in cotton-top tamarins [2]. This then raises the possibility of SL as an alternative to domain-specific knowledge of language in child language acquisition. Methods: To test the utility of SL in word segmentation, we extracted child-directed English speech transcribed in the CHILDES database. The words were then phonetically transcribed using the CMU/TIMIT dictionary, and the spaces between words were removed. There were 226,178 words, consisting of 263,660 syllables. The computational model was written in Python and runs under any flavor of Unix.
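The local-minimum criterion sketched in this abstract can be written out in a few lines. The code below is a generic illustration of transitional-probability segmentation, not the authors' Python model; the syllable stream is an invented example in which within-word transitions (e.g., "pre->ty") are more predictable than between-word ones (e.g., "ty->ba").

# Generic sketch of transitional-probability (TP) segmentation at local minima.
# Not the authors' implementation; the syllable stream is a toy example.

from collections import Counter

def transitional_probs(syllables):
    """TP(x -> y) = count(x followed by y) / count(x), estimated from the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: c / first_counts[pair[0]] for pair, c in pair_counts.items()}

def segment_at_local_minima(syllables):
    """Posit a word boundary wherever a TP is lower than both of its neighbors."""
    tp = transitional_probs(syllables)
    tps = [tp[(a, b)] for a, b in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i in range(1, len(syllables)):
        left = tps[i - 1]                                  # TP into syllable i
        prev_tp = tps[i - 2] if i >= 2 else float("inf")
        next_tp = tps[i] if i < len(tps) else float("inf")
        if left < prev_tp and left < next_tp:              # local minimum -> boundary
            words.append(current)
            current = []
        current.append(syllables[i])
    words.append(current)
    return words

# Words (pre ty), (ba by), (do gy) in varied order, so between-word TPs dip.
stream = "pre ty ba by do gy pre ty do gy ba by pre ty ba by do gy ba by".split()
print(segment_at_local_minima(stream))
# -> [['pre', 'ty'], ['ba', 'by'], ['do', 'gy'], ...]

On a repetitive toy stream like this the criterion recovers the word boundaries; the abstract's point is that the same criterion is far less reliable on naturalistic child-directed speech.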
Article
This paper examines whether there is an asymmetry in production and perception of the stop-fricative contrast by Dutch learning children. The development of stops and fricatives in both word-initial and post-vocalic position is studied. To investigate the acquisition of stops and fricatives in production, longitudinal spontaneous speech data of six Dutch one- to three-year-olds was analyzed. To test infants’ perception of this contrast, a series of word-learning experiments using the Switch paradigm was conducted with 62 Dutch 14-month-olds. The data show similar phonological asymmetries: in both perception and production infants treat stops differently from fricatives. Based on this phonological asymmetry we argue that (a) children use the same lexical representations for perception and production, suggesting that development in perception and production go hand in hand, that (b) early lexical representations are not specified with respect to all features, and that (c) specification does not occur in all prosodic positions at the same time.
Article
Among the earliest and most frequent words that infants hear are their names. Yet little is known about when infants begin to recognize their own names. Using a modified version of the head-turn preference procedure, we tested whether 4.5-month-olds preferred to listen to their own names over foils that were either matched or mismatched for stress pattern. Our findings provide the first evidence that even these young infants recognize the sound patterns of their own names. Infants demonstrated significant preferences for their own names compared with foils that shared the same stress patterns, as well as foils with opposite patterns. The results indicate when infants begin to recognize sound patterns of items frequently uttered in the infants' environments.
Article
Previous research has shown that infants begin to display sensitivities to language-specific phonotactics and probabilistic phonotactics at around 9 months of age. However, certain phonotactic patterns have not yet been examined, such as contrast neutralization, in which phonemic contrasts are neutralized typically in syllable- or word-final position. Thus, the acquisition of contrast neutralization is dependent on infants' ability to perceive certain contrasts in final position. The studies reported here test infants' sensitivity to voicing neutralization in word-final position and infants' discrimination of voicing and place of articulation (POA) contrasts in word-initial and word-final position. Nine and 11-month-old Dutch-learning infants showed no preference for legal versus illegal voicing phonotactics that were contrasted in word-final position. Furthermore, 10-month-old infants showed no discrimination of voicing or POA contrasts in word-final position, whereas they did show sensitivity to the same contrasts in word-initial position. By 16 months, infants were able to discriminate POA contrasts in word-final position, although showing no discrimination of the word-final voicing contrast. These findings have broad implications for models of how learners acquire the phonological structures of their language, for the types of phonotactic structures to which infants are presumed to be sensitive, and for the relative sensitivity to phonemic distinctions by syllable and word position during acquisition.
Article
Comprehending spoken words requires a lexicon of sound patterns and knowledge of their referents in the world. Tincoff and Jusczyk (1999) demonstrated that 6-month-olds link the sound patterns “Mommy” and “Daddy” to video images of their parents, but not to other adults. This finding suggests that comprehension emerges at this young age and might take the form of very specific word-world links, as in “Mommy” referring only to the infant’s mother and “Daddy” referring only to the infant’s father. The current study was designed to investigate if 6-month-olds also show evidence of comprehending words that can refer to categories of objects. The results show that 6-month-olds link the sound patterns “hand” and “feet” to videos of an adult’s hand and feet. This finding suggests that very early comprehension has a capacity beyond specific, one-to-one, associations. Future research will need to consider how developing categorization abilities, social experiences, and parent word use influence the beginnings of word comprehension.
Article
Several strategies have been experimentally demonstrated to play a role in infant word segmentation. However, the interactions among these mechanisms, and how they would scale up to a realistic setting of language learning, have not been adequately explored. This paper presents a series of computational models that tests the effectiveness of these strategies with child-directed English. It is shown that the statistical learning approach using transitional probabilities [Saffran, J., Aslin, R., & Newport, E. (1996). Statistical learning by 8-month-olds. Science, 274, 1926-1928] does not reliably extract English words. To be successful, segmentation mechanisms must be complemented by what appear to be innate constraints on phonological structures. A non-statistical, computationally simple, and empirically motivated model of word segmentation is proposed, which achieves superior segmentation results compared to previous work.
Article
Previous studies of infants' comprehension of words estimated the onset of this ability at 9 months or later. However, these estimates were based on responses to names of relatively immobile, familiar objects. Comprehension of names referring to salient, animated figures (e.g., one's parents) may begin even earlier. In a test of this possibility, 6-month-olds were shown side-by-side videos of their parents while listening to the words "mommy" and "daddy." The infants looked significantly more at the video of the named parent. A second experiment revealed that infants do not associate these words with men and women in general. Infants shown videos of unfamiliar parents did not adjust their looking patterns in response to "mommy" and "daddy."
Article
This paper considers possible problems researchers might face when interpreting the results of studies that employ variants of the preference procedure. Infants show a tendency to shift their preference from familiar to novel stimuli with increasing exposure to the familiar stimulus, a behaviour that is exploited by the habituation paradigm. This change in attentional preference with exposure leads us to suggest that researchers interested in infants' pre-experimental or spontaneous preferences should beware of the potentially confounding effects of exposing infants to familiarization trials prior to employing the preference procedure. The notion that infant attentional preference is dynamic also calls into question the use of the direction of post-familiarization preference per se when interpreting the knowledge or strategies available to infants. We look into the results of a cross-modal word learning study to show how the interpretation of results may be difficult when infants exhibit a significant preference in an unexpected direction. As a possible solution to this problem we propose that significant preferences in both directions should be sought at multiple intervals over time. Copyright © 2004 John Wiley & Sons, Ltd.
Article
Infants prefer to listen to happy speech. To assess influences of speech affect on early lexical processing, 7.5- and 10.5-month-old infants were familiarized with one word spoken with happy affect and another with neutral affect and then tested on recognition of these words in fluent passages. Infants heard all passages either with happy affect or with neutral affect. Contrary to initial expectations that positive affect would facilitate word recognition, younger infants recognized familiarized words only when affect matched across familiarization and testing. Older infants displayed a more mature pattern of word recognition, recognizing words across variations in affect regardless of the direction of change when the task was somewhat simplified. However, younger infants continued to be limited by affective matching in the simplified task. Early processing advantages thus do not necessarily follow listening preferences. Rather, infants' early lexical representations appear to be dominated by covarying properties of experienced exemplars, whether or not these are ultimately relevant for lexical distinctions.
Article
Infants have been described as 'statistical learners' capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either disyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a different set of disyllabic or trisyllabic nonsense words. Listening times revealed successful segmentation of words from fluent speech only when words were uniformly disyllabic or trisyllabic throughout both phases of the experiment. Hearing trisyllabic words during the pre-exposure phase derailed infants' abilities to segment speech into disyllabic words, and vice versa. We conclude that prior knowledge about word length equips infants with perceptual expectations that facilitate efficient processing of subsequent language input.
Article
The goal of this study was to examine the influence of part-word phonotactic probability/neighborhood density on word learning by preschool children with normal vocabularies that varied in size. Ninety-eight children (ages 2;11-6;0) were taught consonant-vowel-consonant (CVC) nonwords orthogonally varying in the probability/density of the CV (i.e. body) and VC (i.e. rhyme). Learning was measured via picture naming. Children with the lowest expressive vocabulary scores showed no effect of either CV or VC probability/density, although floor effects could not be ruled out. In contrast, children with low or high expressive vocabulary scores demonstrated sensitivity to part-word probability/density with the nature of the effect varying by group. Children with the highest expressive vocabulary scores displayed yet a third pattern of part-word probability/density effects. Taken together, word learning by preschool children was influenced by part-word probability/density but the nature of this influence appeared to depend on the size of the lexicon.
Article
In order to acquire their native language, infants must learn to identify and segment word forms in continuous speech. This word segmentation ability is thus crucial for language acquisition. Previous behavioral studies have shown that it emerges during the first year of life, and that early segmentation differs according to the language in acquisition. In particular, linguistic rhythm, which differs across classes of languages, has been found to have an early impact on segmentation abilities. For French, behavioral evidence showed that infants could use the rhythmic unit appropriate to their native language (the syllable) to segment fluent speech by 12 months of age, but failed to show whole word segmentation at that age, a surprising delay compared to the emergence of segmentation abilities in other languages. Given the implications of such findings, the present study reevaluates the issue of whole word and syllabic segmentation, using an electrophysiological method, high-density ERPs (event-related potentials), rather than a behavioral technique, and by testing French-learning 12-month-olds on bisyllabic word segmentation. The ERP data show evidence of whole word segmentation while also confirming that French-learning infants rely on syllables to segment fluent speech. They establish that segmentation and recognition of words/syllables happen within 500 ms of their onset, and raise questions regarding the interaction between syllabic segmentation and multisyllabic word recognition.
Article
Can even a handful of newly learned words help to find further word candidates in a novel spoken language? This study shows that the statistical segmentation of words from a speech stream by adults is facilitated by the presence of known words in the stream. This facilitatory effect is immediate, as the known words were acquired only minutes before the onset of the speech stream. Our results demonstrate an interplay between top-down lexical segmentation and bottom-up statistical learning, in line with infant research suggesting that integration of multiple cues facilitates early language learning. The ability to simultaneously benefit from both types of word segmentation cues appears to be present through adulthood and can thus contribute to second language learning.
Article
The interaction between prosodic and segmental aspects of infant representations for speech was explored using the head-turn paradigm, with untrained everyday familiar words and phrases as stimuli. At 11 months English-learning infants, like French infants (Hallé & Boysson-Bardies, 1994), attended significantly longer to a list of familiar lexical items than to a phonetically comparable rare list, but 9-month-olds did not. Reversing the stress pattern of the familiar items failed to block word-form recognition in 11-month-olds, although a time-course analysis showed that it delayed the infant response. Changing the initial consonant of English words did block word recognition while change to the second consonant did not. Time-course analyses of both the English and the original French data showed that altering the consonant of the unaccented syllable delays word-form recognition in both languages while change to the accented syllable has a stronger effect in English than in French.
Article
This study tests the claim that children acquire collections of phonologically similar word forms, namely, dense neighborhoods. Age of acquisition (AoA) norms were obtained from two databases: parent report of infant and toddler production and adult self-ratings of AoA. Neighborhood density, word frequency, word length, Density x Frequency and Density x Length were analyzed as potential predictors of AoA using linear regression. Early acquired words were higher in density, higher in word frequency, and shorter in length than late acquired words. Significant interactions provided evidence that the lexical factors predicting AoA varied, depending on the type of word being learned. The implications of these findings for lexical acquisition and language learning are discussed.
Article
Learners rely on a combination of experience-independent and experience-dependent mechanisms to extract information from the environment. Language acquisition involves both types of mechanisms, but most theorists emphasize the relative importance of experience-independent mechanisms. The present study shows that a fundamental task of language acquisition, segmentation of words from fluent speech, can be accomplished by 8-month-old infants based solely on the statistical relationships between neighboring speech sounds. Moreover, this word segmentation was based on statistical learning from only 2 minutes of exposure, suggesting that infants have access to a powerful mechanism for the computation of statistical properties of the language input.
Article
Infants' long-term retention of the sound patterns of words was explored by exposing them to recordings of three children's stories for 10 days during a 2-week period when they were 8 months old. After an interval of 2 weeks, the infants heard lists of words that either occurred frequently or did not occur in the stories. The infants listened significantly longer to the lists of story words. By comparison, a control group of infants who had not been exposed to the stories showed no such preference. The findings suggest that 8-month-olds are beginning to engage in long-term storage of words that occur frequently in speech, which is an important prerequisite for learning language.
Article
A series of four experiments was conducted to determine whether English-learning infants can use allophonic cues to word boundaries to segment words from fluent speech. Infants were familiarized with a pair of two-syllable items, such as nitrates and night rates and then were tested on their ability to detect these same words in fluent speech passages. The presence of allophonic cues to word boundaries did not help 9-month-olds to distinguish one of the familiarized words from an acoustically similar foil. Infants familiarized with nitrates were just as likely to listen to a passage about night rates as they were to listen to one about nitrates. Nevertheless, when the passages contained distributional cues that favored the extraction of the familiarized targets, 9-month-olds were able to segment these items from fluent speech. By the age of 10.5 months, infants were able to rely solely on allophonic cues to locate the familiarized target words in passages. We consider what implications these findings have for understanding how word segmentation skills develop.
Article
For nearly two decades it has been known that infants' perception of speech sounds is affected by native language input during the first year of life. However, definitive evidence of a mechanism to explain these developmental changes in speech perception has remained elusive. The present study provides the first evidence for such a mechanism, showing that the statistical distribution of phonetic variation in the speech signal influences whether 6- and 8-month-old infants discriminate a pair of speech sounds. We familiarized infants with speech sounds from a phonetic continuum, exhibiting either a bimodal or unimodal frequency distribution. During the test phase, only infants in the bimodal condition discriminated tokens from the endpoints of the continuum. These results demonstrate that infants are sensitive to the statistical distribution of speech sounds in the input language, and that this sensitivity influences speech perception.
Article
Though the influences of syntactic and semantic regularity on novel word learning are well documented, considerably less is known about the influence of phonological regularities on lexical acquisition. The influence of phonotactic probability, a measure of the likelihood of occurrence of a sound sequence, on novel word learning is investigated in this study. Thirty-four typically developing children (from ages 3 years 2 months to 6 years 3 months) participated in a multitrial word-learning task involving nonwords of varying phonotactic probability (common vs. rare) paired with unfamiliar object referents. Form and referent learning were tested following increasing numbers of exposures (1 vs. 4 vs. 7) and following a 1-week delay. Correct responses were analyzed to determine whether phonotactic probability affected rate of word learning, and incorrect responses were analyzed to examine whether phonotactic probability affected the formation of semantic representations, lexical representations, or the association between semantic and lexical representations. Results indicated that common sound sequences were learned more rapidly than rare sound sequences across form and referent learning. In addition, phonotactic probability appeared to influence the formation of semantic representations and the association between semantic and lexical representations. These results are integrated with previous findings and theoretical models of language acquisition.
Article
Structural analyses of developing lexicons have provided evidence for both children's holistic lexical representations and sensitivity to phonetic segments. In the present investigation, neighbourhood analyses of two children's (age 3;6) expressive lexicons, maternal input, and an adult lexicon were conducted. In addition to raw counts and frequency-weighted counts, neighbourhood size was calculated as the proportion of the lexicon to which each target word is similar, to normalize for vocabulary size differences. These analyses revealed that children's lexicons contain more similar sounding words than previous analyses indicated. Further, neighbourhoods appear denser earlier in development relative to vocabulary size, presumably because children first learn words with more frequent sounds and sound combinations. Neighbourhood density as a proportion of the size of the lexicon then decreases over development as children acquire words with less frequent sounds and sound combinations. These findings suggest that positing fundamentally different lexical representations for children may be premature.
Article
A series of three experiments examined children's sensitivity to probabilistic phonotactic structure as reflected in the relative frequencies with which speech sounds occur and co-occur in American English. Children, ages 2½ and 3½ years, participated in a nonword repetition task that examined their sensitivity to the frequency of individual phonetic segments and to the frequency of combinations of segments. After partialling out ease of articulation and lexical variables, both groups of children repeated higher phonotactic frequency nonwords more accurately than they did low phonotactic frequency nonwords, suggesting sensitivity to phoneme frequency. In addition, sensitivity to individual phonetic segments increased with age. Finally, older children, but not younger children, were sensitive to the frequency of larger (diphone) units. These results suggest not only that young children are sensitive to fine-grained acoustic-phonetic information in the developing lexicon but also that sensitivity to all aspects of the sound structure increases over development. Implications for the acoustic nature of both developing and mature lexical representations are discussed.
Article
Studies of cognitive development in human infants have relied almost entirely on descriptive data at the behavioral level - the age at which a particular ability emerges. The underlying mechanisms of cognitive development remain largely unknown, despite attempts to correlate behavioral states with brain states. We argue that research on cognitive development must focus on theories of learning, and that these theories must reveal both the computational principles and the set of constraints that underlie developmental change. We discuss four specific issues in infant learning that gain renewed importance in light of this opinion.
Article
In two experiments, 1.5-year-olds were taught novel words whose sound patterns were phonologically similar to familiar words (novel neighbors) or were not (novel nonneighbors). Learning was tested using a picture-fixation task. In both experiments, children learned the novel nonneighbors but not the novel neighbors. In addition, exposure to the novel neighbors impaired recognition performance on familiar neighbors. Finally, children did not spontaneously use phonological differences to infer that a novel word referred to a novel object. Thus, lexical competition--inhibitory interaction among words in speech comprehension--can prevent children from using their full phonological sensitivity in judging words as novel. These results suggest that word learning in young children, as in adults, relies not only on the discrimination and identification of phonetic categories, but also on evaluating the likelihood that an utterance conveys a new word.
Article
In a landmark study, Jusczyk and Aslin (1995) demonstrated that English-learning infants are able to segment words from continuous speech at 7.5 months of age. In the current study, we explored the possibility that infants segment words from the edges of utterances more readily than the middle of utterances. The same procedure was used as in Jusczyk and Aslin (1995); however, our stimuli were controlled for target word location and infants were given a shorter familiarization time to avoid ceiling effects. Infants were familiarized to one word that always occurred at the edge of an utterance (sentence-initial position for half of the infants and sentence-final position for the other half) and one word that always occurred in sentence-medial position. Our results demonstrate that infants segment words from the edges of an utterance more readily than from the middle of an utterance. In addition, infants segment words from utterance-final position just as readily as they segment words from utterance-initial position. Possible explanations for these results, as well as their implications for current models of the development of word segmentation, are discussed.
Article
This study examines the role of functional morphemes in the earliest stage of lexical development. Recent research showed that prelinguistic infants can perceive functional morphemes. We inquire whether infants use frequent functors to segment potential word forms. French-learning 8-month-olds were familiarized to two utterance types: a novel noun following a functor, and another novel noun following a prosodically matched nonsense functor. After familiarization, infants' segmentation of the two nouns was assessed in a test phase presenting the nouns in isolation. Infants in Experiment 1 showed evidence of using both frequent functors des and mes (as opposed to the nonsense functor kes) to segment the nouns, suggesting also that they had specific representations of the functors. The infrequent functor vos in Experiment 2 did not facilitate segmentation. Frequency is thus a crucial factor. Our findings demonstrate that frequent functors can bootstrap infants into early lexical learning. Furthermore, the effect of functors for initial word segmentation is likely universal.
Lincoln Infant Lab Package 1.0: A new programme package for IPL, Preferential Listening, Habituation and Eyetracking
  • K Meints
  • A R Woodford
Meints, K., & Woodford, A. (2008). Lincoln Infant Lab Package 1.0: A new programme package for IPL, Preferential Listening, Habituation and Eyetracking. [www document: Computer software & manual]. URL: http://www.lincoln.ac.uk/psychology/babylab.htm
Learning novel neighbors: distributed mappings help children and connectionist models
  • R Newman
  • L Samuelson
  • R Gupta
Newman, R., Samuelson, L., & Gupta, R. (2008). Learning novel neighbors: distributed mappings help children and connectionist models. Paper presented at the 30th Annual Conference of the Cognitive Science Society. Washington, DC.
Scope and limits of statistical learning in word segmentation
  • T Gambell
  • C Yang
Gambell, T., & Yang, C. (2003). Scope and limits of statistical learning in word segmentation. In Proceedings of the 34th Northeastern Linguistic Society Meeting (pp. 29-30). Stony Brook, NY.
Lexical neighborhood effects in 17-month-old word learning
  • G Hollich
  • P W Jusczyk
  • P A Luce
Hollich, G., Jusczyk, P.W., & Luce, P.A. (2002). Lexical neighborhood effects in 17-month-old word learning. In B. Skarabela, S. Fish, & A. Do (Eds.), Proceedings of the 26th