
A Learned Label Modulates Object Representations in 10-Month-Old Infants

Katherine E. Twomey & Gert Westermann, Lancaster University, UK
See also the conference proceedings paper (Twomey & Westermann, 2016)
Background
Infants begin to link their linguistic and nonlinguistic
representations in the first year, responding to familiar labels
from 6 months (Bergelson & Swingley, 2012)
Converging evidence indicates that labels encountered during a task affect online nonlinguistic processing, e.g., guiding category formation (Althaus & Westermann, 2016)
Learned language also affects online category generalization
(e.g., Perry & Samuelson, 2011)
In adults, labels shape cognition (Lupyan, 2012), but do labels shape nonlinguistic representations in infancy?
Gliga, Volein & Csibra, 2010 (E2)
Trained 12-month-old infants with 2 novel 3D objects, one
with a novel label, one without
Immediately afterwards, presented infants with images of (a)
labeled object, (b) unlabeled object, (c) completely novel
object in silence
Recorded EEG gamma band response (index of object
encoding)
Gamma band response was stronger for the labeled object relative to the unlabeled and novel objects
Conclusion: labels modulate object representation
But this label training is unlikely to result in long-term word
learning (Horst & Samuelson, 2008), so whether learned language
modulates learned representations remains unclear
Rationale
Infants’ looking times reflect what they have learned (Fantz,
1964), so we can use looking times to index differences in
representations
Test word learning to confirm that the effect is based on long-term representations
At-home 3D object training
Caregivers trained 10-month-old infants (N = 24) with 2 novel 3D objects in a 5-minute play session, every day for 7 days
Label trained for one object only (counterbalanced)
After 1 week: lab-based eyetracking task
1. Familiarization: 8 images of each item presented in interleaved, silent 10-s trials (16 trials)
2. Word learning test trial: both objects presented simultaneously with the label ("Look! A tanzer! Look at that!"), 12 s
Familiarization
[Figure: looking time (ms) by familiarization trial for the labeled vs. unlabeled stimulus]
Longer looking to the labeled stimulus (β = 0.082, SE = 0.032, χ²(1) = 5.97, p = .014)
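The poster does not state the exact analysis, but an effect of this form is typically tested by comparing mixed-effects models of looking time with and without the label term via a likelihood-ratio test. A minimal sketch, assuming a hypothetical long-format file with columns looking_time, label, trial, and participant (the column names and model structure are assumptions, not the authors' analysis):

```python
# Hedged sketch of a mixed-effects comparison of looking times to the
# labeled vs. unlabeled stimulus during familiarization. Column names,
# file name, and model structure are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("familiarization_looking.csv")  # hypothetical data file

# Full model: looking time predicted by label status and trial number,
# with random intercepts per participant. ML fit (reml=False) so the
# log-likelihoods of the two models are comparable.
full = smf.mixedlm("looking_time ~ label + trial", df,
                   groups=df["participant"]).fit(reml=False)

# Reduced model without the label term.
reduced = smf.mixedlm("looking_time ~ trial", df,
                      groups=df["participant"]).fit(reml=False)

# Likelihood-ratio test for the effect of label (chi-square, 1 df).
lr = 2 * (full.llf - reduced.llf)
p = stats.chi2.sf(lr, df=1)
print(f"chi2(1) = {lr:.2f}, p = {p:.3f}")
print(full.summary())
```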
Word learning
[Figure: proportion of target looking across the 12-s test trial, with chance level and label onset ("tanzer") marked; response at 6556 ms]
Target looking increased in the 2 s after the label relative to the 2 s before the label (t(22) = 2.00, p = .058)
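A hedged sketch of the pre/post-label comparison; the same per-participant differences are what the individual-differences panel below plots. It assumes a hypothetical file with one row per participant and columns pre and post holding proportion target looking in the 2-s windows before and after label onset:

```python
# Hedged sketch of the pre- vs. post-label comparison. File name, window
# boundaries, and column names are illustrative assumptions.
import pandas as pd
from scipy import stats

# One row per participant: proportion of target looking in the 2 s
# before ("pre") and 2 s after ("post") label onset.
df = pd.read_csv("test_trial_looking.csv")  # hypothetical data file

change = df["post"] - df["pre"]   # per-participant change in target looking
print(change.sort_values())       # basis of the individual-differences plot

t, p = stats.ttest_rel(df["post"], df["pre"])  # paired t-test
print(f"t({len(df) - 1}) = {t:.2f}, p = {p:.3f}")
```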
Individual differences?
[Figure: change in proportion of target looking by participant]
Some (but not all) 10-month-olds responded to the label
Discussion
Two possible mechanisms
Label activation: labeling during training increases attention (Baldwin & Markman, 1989); this effect persists during silent familiarization, or implicit activation of the label increases attention (cf. Mani & Plunkett, 2010)
Labels and objects are represented separately; labels augment object representations over experience (Westermann & Mareschal, 2014)
Novelty: label and object representations become integrated during training; absence of the label during familiarization causes a novelty response
Labels are features of object representations in the same way as color/shape/texture (Gliozzi, Mayor, Hu & Plunkett, 2009; see the sketch below)
Computational test of these alternatives: see Capelier-Mourguy, Twomey & Westermann (2016, August), A neurocomputational model of the effect of learned labels on infants’ object representations.
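For intuition only (this is not the Capelier-Mourguy et al. model), a toy sketch of the Gliozzi et al. (2009) labels-as-features idea: a small self-organizing map is trained on an object's feature vector with the label appended as two extra input dimensions, so presenting the same object silently at test mismatches the stored object-plus-label representation and produces a larger, novelty-like error. All sizes, codings, and parameters are illustrative assumptions.

```python
# Toy "labels as features" illustration: a self-organizing map trained on
# an object's visual features plus a label code treated as just another
# feature. Everything here is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(0)

N_VISUAL, N_LABEL = 8, 2     # visual features + 2-unit label code
GRID = 5                     # 5x5 map of units
weights = rng.random((GRID, GRID, N_VISUAL + N_LABEL))

def train(som, pattern, epochs=200, lr=0.2, sigma=1.0):
    """Standard SOM update: pull the best-matching unit and its grid
    neighbours toward the input pattern."""
    coords = np.stack(np.meshgrid(np.arange(GRID), np.arange(GRID),
                                  indexing="ij"), axis=-1)
    for _ in range(epochs):
        dists = np.linalg.norm(som - pattern, axis=-1)
        bmu = np.unravel_index(dists.argmin(), dists.shape)
        grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
        h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))[..., None]
        som += lr * h * (pattern - som)
    return som

def novelty(som, pattern):
    """Distance from the pattern to its best-matching unit: a crude
    index of how unfamiliar the input is to the trained map."""
    return np.linalg.norm(som - pattern, axis=-1).min()

visual = rng.random(N_VISUAL)                     # one object's visual features
label_on = np.concatenate([visual, [1.0, 0.0]])   # object presented with its label
label_off = np.concatenate([visual, [0.0, 0.0]])  # same object presented in silence

weights = train(weights, label_on)   # training always paired object and label

# At test, the silently presented object mismatches the stored
# object-plus-label representation, giving a novelty-like response.
print("object with label:   ", novelty(weights, label_on))
print("object without label:", novelty(weights, label_off))
```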
Acknowledgements
This work was supported by the International Centre for Language and Communicative Development (LuCiD). The support of the Economic and Social Research Council [ES/L008955/1] is gratefully acknowledged. We would like to thank the caregivers and infants who made this work possible.
References
Althaus, N., & Westermann, G. (2016). Labels constructively shape object categories in 10-month-old infants. Journal of Experimental Child Psychology.
Baldwin, D. A., & Markman, E. M. (1989). Establishing word-object relations: A first step. Child Development.
Bergelson, E., & Swingley, D. (2012). At 6–9 months, human infants know the meanings of many common nouns. Proceedings of the National Academy of Sciences.
Fantz, R. L. (1964). Visual experience in infants: Decreased attention to familiar patterns relative to novel ones. Science.
Gliga, T., Volein, A., & Csibra, G. (2010). Verbal labels modulate perceptual object processing in 1-year-old children. Journal of Cognitive Neuroscience.
Gliozzi, V., Mayor, J., Hu, J. F., & Plunkett, K. (2009). Labels as features (not names) for infant categorization: A neurocomputational approach. Cognitive Science.
Lupyan, G. (2012). Linguistically modulated perception and cognition: The label-feedback hypothesis. Frontiers in Psychology.
Mani, N., & Plunkett, K. (2010). In the infant’s mind’s ear: Evidence for implicit naming in 18-month-olds. Psychological Science.
Perry, L. K., & Samuelson, L. K. (2011). The shape of the vocabulary predicts the shape of the bias. Frontiers in Psychology.
Contact
k.twomey@lancaster.ac.uk
gert.westermann@lancaster.ac.uk
www.lucid.ac.uk / wp.lancs.ac.uk/westermann-lab/
