Changes to a scene often go unnoticed if the objects of the change are unattended, making change detection an index of where attention is focused during scene perception. We measured change detection in school-age children and young adults by repeatedly alternating two versions of an image. To provide an age-fair assessment we used a bimanual choice rather than open-ended verbal responses. The difference in detection speed and accuracy with 50-ms versus 250-ms blank screens between views indexed change detection in short-term visual memory, independent of sensory and response processes. Younger children were significantly less efficient than older participants, especially when an object changed color or had a part deleted. Changes in object orientation were detected more readily. These results point to important differences in the perceptual reality of younger and older children.
Theories of autism have proposed that a bias towards low-level perceptual information, or a featural/surface-biased information-processing style, may compromise higher-level language processing in such individuals. Two experiments, utilizing linguistic stimuli with competing low-level/perceptual and high-level/semantic information, tested processing biases in children with autism and matched controls. Children with autism exhibited superior perceptual processing of speech relative to controls and showed no evidence of either a perceptual or a semantic processing bias, whereas controls showed a tendency to process speech semantically. The data provide partial support for the perceptual theories of autism. It is additionally proposed that the pattern of results may reflect different patterns of attentional focusing towards single or multiple stimulus cues in speech between children with autism and controls.
English-learning 7.5-month-olds are heavily biased to perceive stressed syllables as word onsets. By 11 months, however, infants begin segmenting non-initially stressed words from speech. Using the same artificial language methodology as Johnson and Jusczyk (2001), we explored the possibility that the emergence of this ability is linked to a decreased reliance on prosodic cues to word boundaries accompanied by an increased reliance on syllable distribution cues. In a baseline study, where only statistical cues to word boundaries were present, infants exhibited a familiarity preference for statistical words. When conflicting stress cues were added to the speech stream, infants exhibited a familiarity preference for stress words as opposed to statistical words. This was interpreted as evidence that 11-month-olds weight stress cues to word boundaries more heavily than statistical cues. Experiment 2 further investigated these results with a language containing convergent cues to word boundaries. The results of Experiment 2 were not conclusive. A third experiment using new stimuli and a different experimental design supported the conclusion that 11-month-olds rely more heavily on prosodic than statistical cues to word boundaries. We conclude that the emergence of the ability to segment non-initially stressed words from speech is not likely to be tied to an increased reliance on syllable distribution cues relative to stress cues, but instead may emerge due to an increased reliance on and integration of a broad array of segmentation cues.
Behavioral data establish a dramatic change in infants' phonetic perception between 6 and 12 months of age. Foreign-language phonetic discrimination significantly declines with increasing age. Using a longitudinal design, we examined the electrophysiological responses of 7- and 11-month-old American infants to native and non-native consonant contrasts. Analyses of the event-related potentials (ERPs) of the group data at 7 and at 11 months of age demonstrated that infants' discriminatory ERP responses to the non-native contrast are present at 7 months of age but disappear by 11 months of age, consistent with the behavioral data reported in the literature. However, when the same infants were divided into subgroups based on individual ERP components, we found evidence that the infant brain remains sensitive to the non-native contrast at 11 months of age, showing differences in either the P150-250 or the N250-550 time window, depending upon the subgroup. Moreover, we observed an increase in infants' responsiveness to native language consonant contrasts over time. We describe distinct neural patterns in two groups of infants and suggest that their developmental differences may have an impact on language development.
Twelve- and 14-month-old infants' ability to represent another person's visual perspective (Level-1 visual perspective taking) was studied in a looking-time paradigm. Fourteen-month-olds looked longer at a person reaching for and grasping a new object when the old goal-object was visible than when it was invisible to the person (but visible to the infant). These findings are consistent with the interpretation that infants 'rationalized' the person's reach for a new object when the old goal-object was out of sight. Twelve-month-olds did not distinguish between test conditions. The present findings are consistent with recent research on infants' developing understanding of seeing.
Although there is much research on infants' ability to orient in space, little is known regarding the information they use to do so. This research uses a rotating room to evaluate the relative contribution of visual and vestibular information to localizing a target following bodily rotation. Adults responded precisely on the basis of visual flow information. Seven-month-olds responded mostly on the basis of visual flow, whereas 9-month-olds responded mostly on the basis of vestibular information, and 12-month-olds responded mostly on the basis of visual information. Unlike adults, infants of all ages were partially influenced by both modalities. Additionally, 7-month-olds were capable of using vestibular information when there was no visual information for movement or stability, and 9-month-olds still relied on vestibular information when visual information was enhanced. These results are discussed in the context of neuroscientific evidence regarding visual-vestibular interaction, and in relation to possible changes in reliance on visual and vestibular information following acquisition of locomotion.
Previous research has revealed that infants can reason correctly about single-event probabilities with small but not large set sizes (Bonatti, 2008; Teglas et al., 2007). The current study asks whether infants can make predictions regarding single-event probability with large set sizes using a novel procedure. Infants completed two trials: a preference trial to determine whether they preferred pink or black lollipops, and a test trial in which infants saw two jars, one containing mostly pink lollipops and another containing mostly black lollipops. The experimenter removed one occluded lollipop from each jar and placed them in two separate opaque cups. Seventy-eight percent of infants searched in the cup that contained a lollipop from the jar with a higher proportion of lollipops in their preferred color, significantly better than chance. Thus infants can reason about single-event probabilities with large set sizes in a choice paradigm, and contrary to most findings in the infant literature, the prediction task used here appears to be a more sensitive measure than the standard looking-time task.
There is increasing interest in neurobiological methods for investigating the shared representation of action perception and production in early development. We explored the extent and regional specificity of EEG desynchronization in the infant alpha frequency range (6-9 Hz) during action observation and execution in 14-month-old infants. Desynchronization during execution was restricted to central electrode sites, while action observation was associated with a broader desynchronization across frontal, central, and parietal regions. The finding of regional specificity in the overlap between EEG responses to action execution and observation suggests that the rhythm seen in the 6-9 Hz range over central sites in infancy shares certain properties with the adult mu rhythm. The magnitude of EEG desynchronization to action perception and production appears to be smaller for infants than for adults and older children, suggesting developmental change in this measure.
Adaptive behavior requires focusing on relevant tasks while remaining sensitive to novel information. In adult studies of cognitive control, cognitive stability involves maintaining robust cognitive representations while cognitive flexibility involves updating of representations in response to novel information. Previous adult research has shown that the Met allele of the COMT Val158Met gene is associated with enhanced cognitive stability whereas the Val allele is associated with enhanced cognitive flexibility. Here we propose that the stability/flexibility framework can also be applied to infant research, with stability mapping onto early indices of behavioral regulation and flexibility mapping onto indices of behavioral reactivity. From this perspective, the present study examined whether COMT genotype was related to 7-month-old infants' reactivity to novel stimuli and behavioral regulation. Cognitive stability and flexibility were assessed using (1) a motor approach task, (2) a habituation task, and (3) a parental-report measure of temperament. Val carriers were faster to reach for novel toys during the motor approach task and received higher scores on the temperament measure of approach to novelty. Met carriers showed enhanced dishabituation to the novel stimulus during the habituation task and received higher scores on the temperament measures of sustained attention and behavioral regulation. Overall, these results are consistent with adult research suggesting that the Met and Val alleles are associated with increased cognitive stability and flexibility, respectively, and thus suggest that COMT genotype may similarly affect cognitive function in infancy.
Absent reference comprehension is a critical achievement of early development, yet little is known about its emergence. In the current study, 12- and 16-month-old infants' recognition of properties of mentioned absent things was used as an index of absent reference comprehension. Infants were presented with displays matching the color and prior spatial location of a mentioned absent object and displays matching the color and prior spatial location of a non-mentioned absent object. Infants could reveal their comprehension of absent reference by directing more looking and gesturing at the display matching the mentioned absent object than at the display matching the non-mentioned absent object. Infants at both ages revealed some tendency to do so, but infants at 16 months revealed more advanced understanding: infants at 16 months, but not at 12 months, coordinated their attention to the display matching properties of the mentioned absent thing with looks to the speaker. Implications for the infants' understanding of communicative intentions are discussed.
In the context of an imitation game, 12- and 18-month-old infants saw an adult do such things as make a toy mouse hop across a mat (with sound effects). In one condition (House), the adult ended by placing the mouse in a toy house, whereas in another condition (No House) there was no house present at the final location. Infants at both ages usually simply put the mouse in the house (ignoring the hopping motion and sound effects) in the House condition, presumably because they interpreted the adult's action in terms of this final goal and so ignored the behavioral means. In contrast, infants copied the adult's action (both the hopping motion and the sound effects) when no house was present, presumably because here infants saw the action itself as the adult's only goal. From very early, infants' social learning is flexible: infants focus on and copy either the end or the means of an adult action as required by the context.
During the second year of life, infants exhibit a video deficit effect. That is, they learn significantly less from a televised demonstration than they learn from a live demonstration. We predicted that repeated exposure to televised demonstrations would increase imitation from television, thereby reducing the video deficit effect. Independent groups of 6- to 18-month-olds were exposed to live or videotaped demonstrations of target actions. Imitation of the target actions was measured 24 hours later. The video segment duration was twice that of the live presentation. Doubling exposure ameliorated the video deficit effect for 12-month-olds but not for 15- and 18-month-olds. The 6-month-olds imitated from television and showed no video deficit effect at all, learning equally well from live and video demonstrations. Findings are discussed in terms of the perceptual impoverishment theory and the dual representation theory.
This research revealed both similarities and striking differences in early language proficiency among infants from a broad range of advantaged and disadvantaged families. English-learning infants (n = 48) were followed longitudinally from 18 to 24 months, using real-time measures of spoken language processing. The first goal was to track developmental changes in processing efficiency in relation to vocabulary learning in this diverse sample. The second goal was to examine differences in these crucial aspects of early language development in relation to family socioeconomic status (SES). The most important findings were that significant disparities in vocabulary and language processing efficiency were already evident at 18 months between infants from higher- and lower-SES families, and by 24 months there was a 6-month gap between SES groups in processing skills critical to language development.
Infants follow the gaze direction of others from the middle of the first year of life. In attempting to determine how infants understand the looking behavior of adults, a number of recent studies have blocked the adult's line of sight in some way (e.g. with a blindfold or with a barrier). In contrast, in the current studies an adult looked behind a barrier which blocked the child's line of sight. Using two different control conditions and several different barrier types, 12- and 18-month-old infants locomoted a short distance in order to gain the proper viewing angle to follow an experimenter's gaze to locations behind barriers. These results demonstrate that, contra Butterworth, even 12-month-old infants can follow gaze to locations outside of their current field of view. They also add to growing evidence that 12-month-olds have some understanding of the looking behaviors of others as an act of seeing.
Two experiments examined developmental changes in children's visual recognition of common objects during the period of 18 to 24 months. Experiment 1 examined children's ability to recognize common category instances that presented three different kinds of information: (1) richly detailed and prototypical instances that presented local and global shape along with color, texture, and surface features; (2) the same rich and prototypical shapes but without color, texture, or surface features; or (3) only abstract and global representations of object shape in terms of geometric volumes. Significant developmental differences were observed only for the abstract shape representations in terms of geometric volumes, the kind of shape representation that has been hypothesized to underlie mature object recognition. Further, these differences were strongly linked in individual children to the number of object names in their productive vocabulary. Experiment 2 replicated these results and showed further that the less advanced children's object recognition was based on the piecemeal use of individual features and parts, rather than overall shape. The results provide further evidence for significant and rapid developmental changes in object recognition during the same period children first learn object names. The implications of the results for theories of visual object recognition, the relation of object recognition to category learning, and underlying developmental processes are discussed.
A substantial body of evidence demonstrates that infants understand the meaning of spoken words from as early as 6 months. Yet little is known about their ability to do so in the absence of any visual referent, which would offer diagnostic evidence for an adult-like, symbolic interpretation of words and their use in language-mediated thought. We used the head-turn preference procedure to examine whether infants can generate implicit meanings from word forms alone as early as 18 months of age, and whether they are sensitive to meaningful relationships between words. In one condition, toddlers were presented with lists of words taken from the same taxonomic category (e.g. animals or body parts). In a second condition, words taken from two other categories (e.g. clothes and food items) were interleaved within the same list. Listening times were found to be longer in the related-category condition than in the mixed-category condition, suggesting that infants extract the meaning of spoken words and are sensitive to the semantic relatedness between these words. Our results show that infants have begun to construct the rudiments of a semantic system based on taxonomic relations even before they enter a period of accelerated vocabulary growth.
Recent research indicates that toddlers and infants succeed at various non-verbal spontaneous-response false-belief tasks; here we asked whether toddlers would also succeed at verbal spontaneous-response false-belief tasks that imposed significant linguistic demands. We tested 2.5-year-olds using two novel tasks: a preferential-looking task in which children listened to a false-belief story while looking at a picture book (with matching and non-matching pictures), and a violation-of-expectation task in which children watched an adult 'Subject' answer (correctly or incorrectly) a standard false-belief question. Positive results were obtained with both tasks, despite their linguistic demands. These results (1) support the distinction between spontaneous- and elicited-response tasks by showing that toddlers succeed at verbal false-belief tasks that do not require them to answer direct questions about agents' false beliefs, (2) reinforce claims of robust continuity in early false-belief understanding as assessed by spontaneous-response tasks, and (3) provide researchers with new experimental tasks for exploring early false-belief understanding in neurotypical and autistic populations.
Language development has long been associated with motor development, particularly manual gesture. We examined a variety of motor abilities - manual gesture (including symbolic, meaningless, and sequential memory for gestures), oral motor control, and gross and fine motor control - in 129 children aged 21 months. Language abilities were assessed and cognitive and socio-economic measures controlled for. Oral motor control was strongly associated with language production (vocabulary and sentence complexity), with some contribution from symbolic abilities. Language comprehension, however, was associated with cognitive and socio-economic measures. We conclude that symbolic, working memory, and mirror neuron accounts of language-motor control links are limited, but that a common neural and motor substrate for nonverbal and verbal oral movements may drive the motor-language association.
Using an adaptation of the Attentional Networks Test, we investigated aspects of executive control in children with chromosome 22q11.2 deletion syndrome (DS22q11.2), a common but not well understood disorder that produces non-verbal cognitive deficits and a marked incidence of psychopathology. The data revealed that children with DS22q11.2 demonstrated greater difficulty than controls in locating and processing target items in the presence of distracters. Importantly, children with DS22q11.2 showed a deficit in the ability to monitor and adapt to stimulus conflict. These data provide evidence of inadequate conflict adaptation in children with DS22q11.2, a problem that is also present in schizophrenia. The findings of specific executive dysfunction in this group may provide a linkage between particular genetic abnormalities and the development of psychopathology.
Much of human communication and collaboration is predicated on making predictions about others' actions. Humans frequently predict others' mistaken actions and intervene to spare them those mistakes. Such anticipatory correcting reveals a social motivation for unsolicited helping. Cognitively, it requires forward inferences about others' actions through mental attributions of goal and reality representations. The current study shows that infants spontaneously intervene when an adult is mistaken about the location of an object she is about to retrieve. Infants pointed out a correct location for an adult before she committed the mistake. Infants did not intervene in control conditions when the adult had witnessed the misplacement, or when she did not intend to retrieve the misplaced object. Results suggest that preverbal infants anticipate a person's mistaken action through mental attributions of both her goal and reality representations, and correct her proactively by spontaneously providing unsolicited information.
To date, developmental research has rarely addressed the notion that imitation serves an interpersonal, socially based function. The present research thus examined the role of social engagement in 24-month-olds' imitation by manipulating the social availability of the model. In Experiment 1, the children were more likely to imitate the exact actions of a live socially responsive model compared to a videotaped model who could not provide socially contingent feedback. In Experiment 2, the children were more likely to imitate the exact actions of a model with whom they could communicate via a closed-circuit TV system than a videotaped model who could not provide interactive feedback. This research provides clear evidence that children's imitative behavior is affected by the social nature of the model. These findings are discussed in relation to theories on imitation and the video deficit.
To investigate the interaction between segmental and supra-segmental stress-related information in early word learning, two experiments were conducted with 20- to 24-month-old English-learning children. In an adaptation of the object categorization study designed by Nazzi and Gopnik (2001), children were presented with pairs of novel objects whose labels differed by their initial consonant (Experiment 1) or their medial consonant (Experiment 2). Words were produced with a stress-initial (trochaic) or a stress-final (iambic) pattern. In both experiments successful word learning was established when the to-be-remembered contrast was embedded in a stressed syllable, but not when embedded in an unstressed syllable. This was independent of the overall word pattern, trochaic or iambic, or the location of the phonemic contrast, word-initial or -medial. Results are discussed in light of the use of phonetic information in early lexical acquisition, highlighting the role of lexical stress and ambisyllabicity in early word processing.
Childers and Tomasello (2001) found that training 2 1/2-year-olds on the English transitive construction greatly improves their performance on a post-test in which they must use novel verbs in that construction. In the current study, we replicated Childers and Tomasello's finding, but using a much lower frequency of transitive verbs and models in training. We also used novel verbs that were of a different semantic class to our training verbs, demonstrating that semantic homogeneity is not crucial for generalization. We also replicated the finding that 4-year-olds are significantly more productive than 2 1/2-year-olds with the transitive construction, with the new finding that this is also true for verbs of emission. In addition, 'shared syntactic distribution' of novel verb and training verbs was found to have no observable effect on the number of 2 1/2-year-olds who were productive in the post-test.
Object recognition research is typically conducted using 2D stimuli in lieu of 3D objects. This study investigated the amount and complexity of knowledge gained from 2D stimuli in adult chimpanzees (Pan troglodytes) and young children (aged 3 and 4 years) using a titrated series of cross-dimensional search tasks. Results indicate that 3-year-old children utilize a response rule guided by local features to solve cross-dimensional tasks. Four-year-old children and adult chimpanzees use information about object form and compositional structure from a 2D image to guide their search in three dimensions. Findings have specific implications for research conducted in object recognition/perception and broad relevance to all areas of research and daily living that incorporate 2D displays.