Learned labels shape pre-speech infants’ object representations
Katherine E. Twomey1 & Gert Westermann1
1Lancaster University, UK
Word count: 5,698
Author note
Katherine E. Twomey, Department of Psychology, Lancaster University, UK.
Gert Westermann, Department of Psychology, Lancaster University, UK.
Correspondence concerning this article should be addressed to Katherine E.
Twomey, Department of Psychology, Lancaster University, Bailrigg, Lancaster, UK,
LA1 4YF. Contact:
This work was supported by the International Centre for Language and
Communicative Development (LuCiD; [ES/L008955/1]), and an ESRC Future
Research Leaders fellowship awarded to KT [ES/N01703X/1]. The support of the
Economic and Social Research Council is gratefully acknowledged. We would like to
thank the caregivers and infants who made this work possible.
Abstract
Infants rapidly learn both linguistic and nonlinguistic representations of their
environment, and begin to link these from around six months. While there is an
increasing body of evidence for the effect of labels heard in-task on infants’ online
processing, whether infants’ learned linguistic representations shape learned
nonlinguistic representations is unclear. In the current study 10-month-old infants
were trained over the course of a week with two 3D objects, one labeled and one
unlabeled. Infants then took part in a looking time task in which 2D images of the
objects were presented individually in a silent familiarization phase, followed by a
preferential looking trial. During the critical familiarization phase, infants looked for
longer at the previously labeled stimulus than the unlabeled stimulus, suggesting that
learning a label for an object had shaped infants’ representations as indexed by
looking times. We interpret these results in terms of label activation and novelty
response accounts, and discuss implications for our understanding of early
representational development.
Keywords: cognitive development, representational development, word learning, label
Labels shape pre-speech infants’ object representations
Infants’ early-acquired perceptual representations affect the way they respond
to the world around them. For example, by three months they have learned face
representations which enable them to differentiate between own-race and other-race
faces (Kelly et al., 2005). Similarly, just four months of experience of pets in the
home are sufficient for infants to selectively attend to the most informative areas of
animal stimuli in looking time tasks (Hurley & Oakes, 2015). This early
representational development is powerful: in a two-month training study, just two
minutes’ experience per day with images of novel objects prompted five-month-old
infants to learn representations which were sufficiently robust to affect their behavior
in a later 3D object examining task presented at the end of the training period
(Bornstein & Mash, 2010). Importantly, however, early learning is not just perceptual:
in the early days, weeks and months infants also acquire linguistic representations.
Even newborns can discriminate their native language from a non-native language
(Moon, Lagercrantz, & Kuhl, 2013) and detect grammatical categories in maternal
speech (Shi, Werker, & Morgan, 1999). By eight months infants can detect linguistic
structure and segment words by tracking co-occurrence statistics in the speech sounds
they hear (Saffran, Aslin, & Newport, 1996).
Clearly, stored linguistic and nonlinguistic representations are linked –
infants’ earliest words refer to the objects they experience on a daily basis (Clerkin,
Hart, Rehg, Yu & Smith, 2017). The first indications of these links appear before the
onset of speech: infants as young as six months can correctly identify the referents of
frequently-heard words (Bergelson & Swingley, 2012; see also Delle Luche, Floccia,
Granjon, & Nazzi, 2016). These early label-object associations are strengthened
incrementally over the long term via cross-situational learning (Smith & Yu, 2008), in
which repeated encounters of label-object co-occurrences in a variety of contexts
eventually lead to long-term word learning.
Importantly, these stored label-object representations can shape online
processing. For example, the structure of infants’ early vocabulary affects how they
generalize category labels in the moment: toddlers whose vocabulary is dominated by
count nouns which refer to solid objects in shape-based categories show a strong
tendency to generalize new nouns based on the shape of their referents, while this bias
is reduced for children with a large number of nouns that do not follow this pattern
(Perry & Samuelson, 2011). Equally, labels heard in-the-moment also begin to exert a
powerful influence on processing during the first year. For example, the in-task
presence of a novel label can direct ten-month-old infants’ attention to commonalities
between category exemplars and guide online category formation (Althaus &
Plunkett, 2015; Plunkett, Hu, & Cohen, 2008), and labels themselves facilitate the
formation of new representations over other auditory cues (e.g., Althaus &
Westermann, 2016; for a review, see Robinson, Best, Deng, & Sloutsky, 2012).
Whereas it has been shown that both learned and novel linguistic
representations affect infants’ nonlinguistic processing in-the-moment, it is not clear
how linguistic experience shapes infants’ learned nonlinguistic representational
structure. In adults, learned language has repeatedly been shown to shape
representation in a range of perceptual domains, for example color, shape and music
(Dolscheid, Shayan, Majid, & Casasanto, 2013; Lupyan, 2016; Winawer et al., 2007).
There is some evidence for similar effects in older children: in a target detection task
in which a colored target was presented either on a same- or different-color-category
background, toddlers who knew the relevant color labels detected targets more
quickly in the right visual field, in line with adults in similar tasks. However, toddlers
still learning color terms detected targets more quickly in the left visual field,
suggesting that language learning may shape early perceptual representations, in the
color domain at least (Franklin et al., 2008).
To our knowledge only a single study has explicitly explored the relationship
between learned labels and nonlinguistic representations in infants. Gliga, Volein and
Csibra (2010; E2) trained infants with novel 3D objects, labeling one (Look at the
blicket!) and not the other (Look at that!) in a four-minute play session. Immediately
following training infants were presented with images of the two trained objects and a
third, novel object while their EEG responses were recorded. Gamma-band activity,
which has been interpreted as a neurophysiological marker of object encoding, was
significantly stronger in response to the labeled object than to the unlabeled or novel
object, suggesting that labeling modulated infants’ object representations. However, it
is unlikely that the training provided was sufficient for these 12-month-olds to retain
the novel word over an extended period (Horst & Samuelson, 2008), and it is
therefore possible that the task tapped temporary representations held in short-term
memory. Thus, whether or not infants’ learned language shapes their nonlinguistic
representations remains unclear.
The following sections describe a test of this hypothesis in pre-speech infants.
We asked parents of 10-month-olds to train their infants with two novel toy objects at
home over a week, labeling one object with a novel word (labeled object), but not the
other object (unlabeled object). After this week-long training we recorded infants’
looking times in a familiarization task where they were shown both objects in silence.
Since it is long-established that infants’ looking times in familiarization tasks reflect
the characteristics of their learned representations (Fantz, 1964), an effect of language
on infants’ long term object representations should be indexed by differences in
looking times between the previously labeled and the previously unlabeled object.
This being so, infants who had learned robust label-object associations should show
the effect most strongly. Thus, we also included a single preferential looking trial in
which both images appeared simultaneously, accompanied by the label, and used
infants’ responses on this trial as a proxy for this learning. This trial was included
after familiarization to prevent the presentation of the label from biasing infants’
responses in the critical familiarization phase.
Method
Participants
Twenty-four 10-month-old infants (12 girls; Mage = 10 months, 23 days; SD = 14.15
days, range = 9 months, 26 days – 11 months, 13 days) participated. All infants were
typically developing and monolingual English learning with no family history of color
blindness. Data from an additional six infants were excluded due to failure to start or
complete the eyetracking task because of excess movement and/or crying (2);
experimenter error (1); low eyetracker sample rate (< 35%; 1); and failure to complete
sufficient training sessions (2). All participants returned for the test session
approximately a week after the introductory session (6 days: 2; 7 days: 19; 8 days: 3).
Families were recruited by contacting caregivers who had previously indicated
interest in participating in child development research. Caregivers’ travel expenses for
both visits were reimbursed and infants were given a storybook for participating.
Stimuli
Play sessions. 3D stimuli are depicted in Figure 1, and consisted of two age-
appropriate wooden toy objects (castanets and two wooden balls joined with string),
chosen because they are novel to 10-month-old infants (Fenson et al., 1994). Objects
were approximately equal in size and were painted either red or blue using non-toxic
paint. The label was tanzer, a pseudoword selected because it is plausible in English
and was used in a previous developmental study (Horst & Twomey, 2012).
Looking time task. Familiarization stimuli were digital photographs of the individual
training objects presented centrally on a white background. Stimuli for the
preferential looking trial were photographs of the training objects presented side-by-
side on a white background. The auditory stimulus for the preferential looking trial
consisted of the phrase Look! A tanzer! spoken by a female speaker from the local
area and recorded and edited for timing and clarity in Audacity 2.0.6. The phrase
onset was at 4000 ms, label onset at 5171 ms, and label offset at 6000 ms. Calibration
and attention getter stimuli were a short video of a bouncing cartoon bird,
accompanied by a jingling sound.
Procedure and Design
Visit 1: Play session. Each infant received two objects. Objects’ color and label were
counterbalanced between participants such that for each object type, each infant
received one red and one blue item. Each exemplar was labeled for half of the infants
and unlabeled for the other half of the infants.
First, the experimenter showed the caregiver the two objects and asked them
whether their child had similar toys at home. Substitute items were available;
however, no child had prior experience of the objects. The experimenter then
explained that she would demonstrate a play session, with the goal of teaching the
infant a word for one of the objects. She then asked the caregiver to conduct a similar
play session for five minutes, every day for one week, and explained that they would
be invited to return to the lab after seven days to take part in a looking time study.
The play session took place in a quiet, infant-friendly room with the caregiver
present at all times. Before the session began, the experimenter emphasized that
caregivers should not invent a name for the unlabeled object: only the label tanzer
should be used, and only in reference to the labeled object. With the parent watching,
the experimenter then sat opposite the infant on the floor and introduced both toys by
holding them in front of the infant and allowing the infant to take the toys in their own
time. While the infant was looking at the toy the experimenter referred to them using
a label or a pronoun as appropriate, for example “Look, a tanzer!” (labeled), “Look at
this!” (unlabeled). The experimenter explained to the caregiver that they should
encourage their child to interact with both toys for an approximately equal amount of
time, and that their child should be allowed to play with both toys at the same time (to
encourage comparison, which promotes encoding; Gentner & Namy, 1999; Oakes,
Kovack-Lesh, & Horst, 2009). Infants heard the label approximately twice every
fifteen seconds. After the play session caregivers were given the toys, written
instructions and a sticker chart on which to record their play sessions.
Visit 2: Looking time task. Before the second session began caregivers were asked
whether they had completed all play sessions. All but three parents reported
completing a play session on all seven days. Verbal report tallied with the sticker
charts (7 sessions: 21; 6 sessions: 3). Parents reported no difficulty in completing the
sessions, although some reported an overall decline of interest in the stimuli by the
end of the week.
The looking time task took place in a quiet, dimly-lit testing room. Children
were seated on their caregiver’s lap 50-70 cm in front of a 21.5” 1920 x 1080
computer screen. A Tobii X120 eyetracker located beneath the screen recorded the
child’s gaze location at 17 ms intervals, and a video camera above the screen recorded
the caregiver and child throughout the procedure. Caregivers were instructed not to
interact with their child or look at the screen during the task to avoid biasing their
child’s behavior.
The eyetracker was first calibrated using a five-point infant calibration
procedure. We displayed an attention-grabbing animation in the four corners and
center of a 3 x 3 grid on a grey background accompanied by a jingling noise, and
recorded infants’ orientation to it with a key press. Calibration accuracy was checked
and repeated if necessary (1 infant).
The attention-getting stimulus then appeared in the center of the screen.
Immediately after the infant oriented towards the attention-getter, the experimenter
began the familiarization phase using a keypress. Familiarization stimuli were
presented individually in silence for 10 s. Infants saw eight identical images of the
previously-labeled object and eight identical images of the unlabeled object.
Presentation of both objects was interleaved. The object shown first was
counterbalanced between children. Each trial was immediately followed by the
attention-getter. Subsequent trials were advanced manually by the experimenter once
the infant had reoriented to the screen, or began automatically after 5 s.
Immediately following the familiarization trials a single preferential looking
trial was presented in an identical manner. Left-right positioning of the objects
(castanet/ball and labeled/unlabeled) was counterbalanced between children.
The preferential looking trial was 12 s long, with auditory stimulus beginning at 4000
ms, label onset at 5171 ms and offset at 6000 ms.
Coding and data cleaning
Timestamps for which the eyetracker failed to reliably detect either eye were
excluded (41.06%; this is broadly in line with existing studies of data reliability in
infant eyetracking work; Wass, Smith, & Johnson, 2012). On each familiarization
trial, the AOI was centered on the single image and measured approximately 950 by
700 pixels. On the preferential looking trial, AOIs divided the screen in half
horizontally and were centered vertically, measuring 980 by 860 pixels. Individual
gaze samples were numerically coded (-1 = look away, 0 = background look, 1 = AOI
look), creating a raw looking time measure. For familiarization trials looks away from
the screen were discarded (16.31%) and for preferential looking trials non-AOI looks
were discarded (0.08%). This resulted in a final dataset of 89,099 familiarization trial
and 13,213 preferential looking trial gaze samples.
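The sample-level coding scheme above (-1 = look away, 0 = background look, 1 = AOI look, with eyetracker dropouts excluded first) can be sketched as below. The AOI rectangle is a hypothetical central placement consistent with the reported ~950 × 700 px familiarization AOI on the 1920 × 1080 display, and the gaze points are invented.

```python
SCREEN_W, SCREEN_H = 1920, 1080
AOI = (485, 190, 1435, 890)  # left, top, right, bottom: a centered 950 x 700 box

def code_sample(sample):
    """Return -1/0/1 for one gaze sample, or None for an eyetracker dropout."""
    if sample is None:                      # neither eye reliably detected
        return None
    x, y = sample
    if not (0 <= x < SCREEN_W and 0 <= y < SCREEN_H):
        return -1                           # look away from the screen
    left, top, right, bottom = AOI
    return 1 if (left <= x <= right and top <= y <= bottom) else 0

samples = [None, (960, 540), (10, 10), (2000, 500)]   # invented gaze records
codes = [code_sample(s) for s in samples]             # [None, 1, 0, -1]
valid = [c for c in codes if c is not None]           # dropouts excluded first
kept = [c for c in valid if c != -1]                  # familiarization: drop look-aways
```
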
Results
If learned labels shape infants’ long-term object representations, we hypothesized that
infants who had learned an association between the label and the corresponding
stimulus should exhibit differences in looking times when viewing the previously
labeled versus the previously unlabeled stimulus, even when these stimuli were
presented in silence. Thus, our primary variable of interest was looking times during
the familiarization phase. However, on this account, infants with more robust label
associations should show greater differences in looking time. Thus, we first analyzed
the preferential looking trial to obtain an index of individual infants’ label responses
as a proxy for the strength of their label-object associations.
In looking time studies employing the habituation paradigm an increase in
looking to a novel stimulus after the habituation phase is taken as an indicator that
infants are attending to the task, allowing researchers to rule out fatigue as a cause of
any subsequent effects (Oakes, 2010). It is possible that fatigue could affect children’s
looking times in the current study, particularly since we employed a fixed duration
familiarization phase rather than an infant-controlled habituation phase. To rule out the
influence of fatigue on infants’ preferential looking, we compared their pre-label
looking on the preferential looking trial to their looking on the final trial of the
previous familiarization phase. Specifically, we defined a pre-labeling block as the
5171 ms before the label onset, and a final-trial block as the final 5171 ms of the final
familiarization trial. For both blocks we calculated each infant’s proportion of looking
to the AOI out of total screen looks, and submitted these proportions to a two-tailed
paired samples t-test. The t-test confirmed that infants’ responses to the label on the
preferential looking trial were unlikely to be the result of fatigue (t(21) = 2.65, p =
.015): infants’ proportion AOI looking was greater for the pre-labeling block (M =
0.99, SD = 0.04) than at the end of the final familiarization trial (M = 0.92).
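The fatigue check above compares per-infant proportions with a two-tailed paired-samples t-test. A minimal stdlib sketch of that computation, run here on invented proportions (the study's own data give t(21) = 2.65):

```python
import math

def paired_t(xs, ys):
    """Paired-samples t statistic and degrees of freedom for two-tailed testing."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n), n - 1

# Invented per-infant proportions of AOI looking (pre-labeling block vs. the
# final 5171 ms of the last familiarization trial); not the study's data.
pre   = [0.99, 1.00, 0.95, 0.98]
final = [0.92, 0.96, 0.90, 0.94]
t, df = paired_t(pre, final)
```
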
Next, to obtain an index of infants’ responses to the label, we defined a post-
labeling time window between 233 ms and 2000 ms after label onset (Mani &
Plunkett, 2010) and a corresponding pre-labeling window as the 2000 ms immediately
preceding label onset. Twenty-one infants contributed data to the pre-labeling window
and 23 to the post-labeling window. To establish whether infants responded to the
label we conducted two analyses. First, we examined overall changes in proportion of
target looking. Infants showed no evidence of a pre-labeling target preference (M =
0.50, SD = 0.30, d = .0069; t(20) = -0.032, p = .98; all tests two-tailed). Post-labeling,
infants’ small preference for the target (M = 0.62, SD = 0.33, d = .36) did not reach
significance (t(22) = 1.73, p = .098). However, infants overall showed a small
increase in target preference from pre- to post-labeling (d = 0.42), although this
difference was not robust (t(22) = 2.00, p = .058). Next, to obtain individual response
scores, we subtracted each infant’s pre- from post-labeling proportion target looking
(Bergelson & Swingley, 2011). Scores are depicted in Figure 2. While some infants
incorrectly switched from looking at the target to the distractor after labeling (infants
a–g) and some showed no response to the label, infants j–w correctly increased
their target looking, in some cases substantially. Thus, inasmuch as these shifts in
attention serve as an index of having learned the label (we return to this issue in the
Discussion), this analysis suggests that at least some infants had learned a label-object
association sufficiently robust to allow them to correctly shift their attention to the
target. Critically, if learned labels affect infants’ object representations, those infants
who responded correctly should also show greater differences in looking times in the
preceding silent familiarization phase. We therefore incorporated these response
scores as a predictor in our main analysis of looking times during familiarization.
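The response score just described — post- minus pre-labeling proportion of target looking, using the 233–2000 ms post-onset and 2000 ms pre-onset windows — can be sketched as follows. Window extraction is assumed done upstream, and the gaze samples are invented.

```python
def proportion_target(window):
    """Proportion of target looks among target + distractor looks in a window."""
    looks = [s for s in window if s in ("target", "distractor")]
    return sum(s == "target" for s in looks) / len(looks) if looks else None

def response_score(pre_window, post_window):
    """Post- minus pre-labeling proportion target looking (after Bergelson &
    Swingley, 2011); positive = attention shifted toward the labeled object."""
    pre = proportion_target(pre_window)
    post = proportion_target(post_window)
    return None if pre is None or post is None else post - pre

# Invented gaze samples for one infant.
pre_window  = ["target", "distractor", "distractor", "target"]   # 0.50 target
post_window = ["target", "target", "target", "distractor"]       # 0.75 target
score = response_score(pre_window, post_window)                  # 0.25
```
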
Overall looking times during familiarization are depicted in Figure 3. We
submitted infants’ looks to the stimulus (i.e. AOI looks) to a binomial mixed effects
model using the R package lme4 (version 1.1-11; Bates, Mächler, Bolker, & Walker,
2015). We included fixed effects of label (labeled = 1, unlabeled = 0), trial (1 – 8) and
response score and their two-way interactions. The three-way interaction was dropped
to achieve model convergence. Random effects were selected by fitting a maximal
random effects structure and simplifying until the model converged (Barr, Levy,
Scheepers, & Tily, 2013). The final model included by-participant intercepts. Results
are presented in Table 1.
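The model itself was fitted in R with lme4 (binomial family, by-participant random intercepts). As a self-contained illustration of the fixed-effects logic only — not the paper's model — the sketch below fits a plain logistic regression with label and trial predictors by gradient ascent to synthetic data built to echo the reported pattern (more looking to the labeled object, declining across trials). All numbers here are invented.

```python
import math, random

random.seed(1)

# Synthetic sample-level data: look (1) vs. no look (0), generated so that
# looking is higher for the labeled object and declines over trials 1-8.
rows = []
for trial in range(1, 9):
    for label in (0, 1):
        p = 1 / (1 + math.exp(-(1.0 + 0.8 * label - 0.25 * trial)))
        rows.extend((label, trial, int(random.random() < p)) for _ in range(50))

# Plain logistic regression (intercept + label + trial) by gradient ascent on
# the log-likelihood; lme4 would additionally estimate random intercepts.
b0 = b_label = b_trial = 0.0
lr, n = 0.1, len(rows)
for _ in range(1500):
    g0 = gl = gt = 0.0
    for label, trial, y in rows:
        p = 1 / (1 + math.exp(-(b0 + b_label * label + b_trial * trial)))
        err = y - p
        g0 += err; gl += err * label; gt += err * trial
    b0 += lr * g0 / n; b_label += lr * gl / n; b_trial += lr * gt / n

# Expected qualitative pattern: b_label > 0 (more looking at the previously
# labeled object) and b_trial < 0 (looking declines over familiarization).
```
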
As is typical in looking time studies, infants became less likely to look
towards the stimulus as familiarization progressed (negative main effect of trial).
While this decrease was not different for the labeled or unlabeled stimulus
(nonsignificant label by trial interaction), the odds of looking to the stimulus
decreased faster for infants with higher response scores than infants with lower
response scores (negative trial by response interaction). Higher-response infants were
also more likely to look at the stimulus overall (positive main effect of response).
Importantly, infants were overall more likely to look at the labeled than the unlabeled
stimulus. This supports our main hypothesis: whether infants had previously been
taught a label for an object affected their looking times (positive main effect of label).
Furthermore, this label effect interacted with infants’ response scores.
To understand this interaction we ran two separate binomial mixed effects
models on raw looking times to the previously labeled and unlabeled stimuli, each
with fixed effects of trial and response score and their interaction, and retaining the
same random effects structure as the previous model. When infants viewed the
unlabeled stimulus, they were less likely to look at the stimulus across trials (main
effect of trial: beta = -0.067, SE = 0.010, z = -6.73, p < .001). Response scores had no
effect (main effect: beta = 0.70, SE = 0.55, z = 1.26, p = .20; trial x response score:
beta = 0.030, SE = 0.026, z = 1.13, p = .26). Thus, whether infants had responded
correctly or incorrectly to the label after familiarization, their odds of looking at the
unlabeled stimulus were the same. When infants viewed the previously labeled
stimulus, they were also less likely to look at the stimulus across trials (main effect of
trial: beta = -0.079, SE = 0.010, z = -7.52, p < .001). Response scores had no
independent effect (main effect: beta = 0.57, SE = 0.46, z = 1.26, p = .21). However,
there was an interaction between the effect of response score and trial on infants’ odds
of looking at the stimulus (beta = -0.15, SE = 0.023, z = -6.47, p < .001). Because this
interaction involved two continuous variables, to explore it we grouped children by
response score percentiles and plotted them. As shown in Figure 4, although the
relationship between response score and looking time is complex, infants with highest
response scores initially looked for longest at the labeled object and showed a steep,
relatively smooth decline in looking, while infants with lower response scores showed
a more variable profile with a shallower decline. Overall, however, learning a label
for an object did affect infants’ looking times to that object, even when presented in
silence.
Discussion
The current study explored whether infants’ learned linguistic representations
influence their learned nonlinguistic representations. Ten-month-old infants were
trained over a week by their caregivers with two 3D objects and were taught a novel
label for just one of them. When these objects were presented in silence in a looking
time task, infants spent longer looking at the previously labeled than the unlabeled
stimulus. Further, whether or not infants responded to the label on a final preferential
looking trial affected their looking times – but only when viewing the previously
labeled stimulus. Given that training and familiarization for each object were identical
except for the presence of the label during the play sessions, taken together these
findings suggest that prior label training affected infants’ responses in the silent
looking time task.
While infants looked for longer overall at the labeled stimulus during
familiarization, we found differences in looking between infants who responded
correctly to the label on the preferential looking trial and those who responded
incorrectly. Importantly, latency to respond to labeling may provide an index of
infants’ speed of verbal processing, rather than depth of lexical representation
(Fernald, Pinto, Swingley, Weinbergy, & McRoberts, 1998). Thus, it is possible that
our response scores measure intrinsic individual differences rather than whether or not
infants had learned the word. However, these contrasting patterns of looking emerged
in response to the previously labeled stimulus only; if our response scores tapped
some phenomenon unrelated to infants’ strength of label-object representations, we
would expect similar differences to emerge in response to the unlabeled stimulus.
Nonetheless, not all infants responded correctly to the label, and some shifted their
attention away from the target. We do not therefore claim that these pre-speech
infants learned a new word during training, but rather we interpret these results as
supporting accounts of early word learning in which infants learn label-object
associations incrementally (Bion, Borovsky, & Fernald, 2013; McMurray, Horst, &
Samuelson, 2012; Yurovsky, Fricker, Yu, & Smith, 2014). That is, these infants
learned something about the object, something about the label, and something about
the mapping between the two. These partial associations were then sufficient to
influence infants’ looking times. Overall, while future research is needed to delimit
the boundaries of very young infants’ word learning abilities, this study suggests that
10-month-old infants are capable of learning at least partial label-object mappings
from limited exposure.
Critically, the familiarization phase was silent, and which object had been
labeled during training was counterbalanced across participants. Thus, infants’ longer
looking times to the labeled object could only have arisen due to some kind of
difference in their representations of the two objects, and these differences can only
have been due to the presence of a label during training. Two mechanisms could
account for these results: label activation, or novelty preference.
Existing research demonstrates that infants will interact for longer with objects
in the presence of those objects’ labels (Baldwin & Markman, 1989). If seeing an
object activates its label representation, then, this activation could in turn trigger
increases in looking times. Indeed, implicit naming of silently presented images has
been demonstrated in 18-month-old infants (Mani & Plunkett, 2010). This label
activation account is compatible with theories of representational structure in which
labels and objects are represented separately, either qualitatively differently (Waxman
& Markow, 1995) or distantly in the same representational space (Westermann &
Mareschal, 2014), and become linked over experience. On these accounts, linguistic
representations are separate from nonlinguistic representations, but affect them
through association. Alternatively, the observed looking time differences could reflect
a novelty preference. Specifically, several “labels-as-features” accounts of early
representational development assume that labels initially serve as one among multiple
nonreferential features in object representations (Gliozzi, Mayor, Hu, & Plunkett,
2009; Sloutsky & Fisher, 2004; Sloutsky & Lo, 1999); for example, the word
strawberry and the color red will have the same status in a speaker’s representation of
the fruit. Thus, if a stored representation incorporates a label, then encountering the
object without the label results in an incongruent online representation (Lupyan,
2008). This incongruence evokes a novelty response – indexed in the current study by
increased looking times during familiarization to the previously labeled object.
While it is not possible to ascertain which of these two accounts is the most
plausible in the context of the data presented here, each account makes testable
predictions, pointing to future studies that could adjudicate between the two. First, the
implicit naming account requires an extension of Mani & Plunkett’s (2010) lexical
priming effects in 18-month-old toddlers to 10-month-old infants: if younger infants
do not activate learned labels when encountering their referents in silence, we would
expect no differences in looking time during familiarization in our study. Second, on
“labels-as-features” accounts, if a label is shared between multiple exemplars of a
category, this shared feature should increase between-exemplar similarity (see also
Westermann & Mareschal, 2014). Thus, with the current design, training infants with
a category of labeled objects and a category of unlabeled objects should provoke a
novelty preference during familiarization for the previously unlabeled object. Finally,
computational work which explicitly models these two accounts is currently
underway (Capelier-Mourguy, Twomey, & Westermann, 2016).
More broadly, the current study contributes to our understanding of the
relationship between early language learning and representational development. We
demonstrate that pre-speech infants can learn label-object associations in just one
week that are sufficiently robust to affect their subsequent looking times to these
objects when presented in silence. These findings offer converging evidence that
learning a label for an object restructures that object’s representation, and in turn
affects behavior in-the-moment, illustrating the multiple timescales at play in early
representational development.
References
Althaus, N., & Plunkett, K. (2015). Categorization in infancy: Labeling induces a
persisting focus on commonalities. Developmental Science.
Althaus, N., & Westermann, G. (2016). Labels constructively shape categories in 10-
month-old infants. Journal of Experimental Child Psychology.
Baldwin, D. A., & Markman, E. M. (1989). Establishing word-object relations: A first
step. Child Development, 60(2), 381–398.
Barr, D. J., Levy, R., Scheepers, C., & Tily, H. J. (2013). Random effects structure for
confirmatory hypothesis testing: Keep it maximal. Journal of Memory and
Language, 68(3), 255–278.
Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting linear mixed-effects
models using lme4. Journal of Statistical Software, 67(1), 1–48.
Bergelson, E., & Swingley, D. (2012). At 6–9 months, human infants know the
meanings of many common nouns. Proceedings of the National Academy of
Sciences, 109(9), 3253–3258.
Bornstein, M. H., & Mash, C. (2010). Experience-based and on-line categorization of
objects in early infancy. Child Development, 81(3), 884–897.
Borovsky, A., Ellis, E. M., Evans, J. L., & Elman, J. L. (2015). Lexical leverage:
Category knowledge boosts real-time novel word recognition in 2-year-olds.
Developmental Science.
Capelier-Mourguy, A., Twomey, K. E., & Westermann, G. (2016). New light on the
status of labels. In Papafragou, A., Grodner, D., Mirman, D., & Trueswell,
J.C. (Eds.) (2016). Proceedings of the 38th Annual Conference of the
Cognitive Science Society. Austin, TX: Cognitive Science Society.
Clerkin, E. M., Hart, E., Rehg, J. M., Yu, C., & Smith, L. B. (2017). Real-world
visual statistics and infants’ first-learned object names. Philosophical
Transactions of the Royal Society B, 372(1711), 20160055.
Delle Luche, C., Floccia, C., Granjon, L., & Nazzi, T. (2016). Infants’ first words are
not phonetically specified: Own name recognition in British English-learning
5-month-olds. Infancy.
Dolscheid, S., Shayan, S., Majid, A., & Casasanto, D. (2013). The thickness of
musical pitch: Psychophysical evidence for linguistic relativity. Psychological
Science, 24(5), 613–621.
Fantz, R. L. (1964). Visual experience in infants: Decreased attention to familiar
patterns relative to novel ones. Science, 146, 668–670.
Fenson, L., Dale, P. S., Reznick, J. S., Bates, E., Thal, D. J., & Pethick, S. J. (1994).
Variability in early communicative development. Monographs of the Society
for Research in Child Development, 59(5).
Fernald, A., Pinto, J. P., Swingley, D., Weinbergy, A., & McRoberts, G. W. (1998).
Rapid gains in speed of verbal processing by infants in the 2nd year.
Psychological Science, 9(3), 228–231.
Franklin, A., Drivonikou, G. V., Clifford, A., Kay, P., Regier, T., & Davies, I. R. L.
(2008). Lateralization of categorical perception of color changes with color
term acquisition. Proceedings of the National Academy of Sciences, 105(47).
Gentner, D., & Namy, L. L. (1999). Comparison in the development of categories.
Cognitive Development, 14(4), 487–513.
Gliga, T., Volein, A., & Csibra, G. (2010). Verbal labels modulate perceptual object
processing in 1-year-old children. Journal of Cognitive Neuroscience, 22(12).
Gliozzi, V., Mayor, J., Hu, J. F., & Plunkett, K. (2009). Labels as features (not names)
for infant categorization: A neurocomputational approach. Cognitive Science,
33(4), 709–738.
Hohenstein, S., & Kliegl, R. (2014). remef (REMove EFfects) (version v0.6.10).
Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de
Weijer, J. (2011). Eye tracking: A comprehensive guide to methods and
measures. Oxford: Oxford University Press.
Horst, J. S., & Samuelson, L. K. (2008). Fast mapping but poor retention by 24-
month-old infants. Infancy, 13(2), 128–157.
Horst, J. S., & Twomey, K. E. (2012). It’s taking shape: Shared object features
influence novel noun generalizations. Infant and Child Development, 22(1).
Hurley, K. B., & Oakes, L. M. (2015). Experience and distribution of attention: Pet
exposure and infants’ scanning of animal images. Journal of Cognition and
Development, 16(1), 11–30.
Kelly, D. J., Quinn, P. C., Slater, A. M., Lee, K., Gibson, A., Smith, M., … Pascalis,
O. (2005). Three-month-olds, but not newborns, prefer own-race faces.
Developmental Science, 8(6), F31–F36.
Lupyan, G. (2008). From chair to “chair”: A representational shift account of object
labeling effects on memory. Journal of Experimental Psychology: General,
137(2), 348–369.
Lupyan, G. (2016). The paradox of the universal triangle: Concepts, language, and
prototypes. The Quarterly Journal of Experimental Psychology, 1–69.
Mani, N., & Plunkett, K. (2010). In the infant’s mind’s ear: Evidence for implicit
naming in 18-month-olds. Psychological Science, 21(7), 908–913.
McMurray, B., Horst, J. S., & Samuelson, L. K. (2012). Word learning emerges from
the interaction of online referent selection and slow associative learning.
Psychological Review, 119(4), 831–877.
Moon, C., Lagercrantz, H., & Kuhl, P. K. (2013). Language experienced in utero
affects vowel perception after birth: a two-country study. Acta Paediatrica,
102(2), 156–160.
Oakes, L. M. (2010). Using habituation of looking time to assess mental processes in
infancy. Journal of Cognition and Development, 11(3), 255–268.
Oakes, L. M., Kovack-Lesh, K. A., & Horst, J. S. (2009). Two are better than one:
Comparison influences infants’ visual recognition memory. Journal of
Experimental Child Psychology, 104(1), 124–131.
Perry, L. K., & Samuelson, L. K. (2011). The shape of the vocabulary predicts the
shape of the bias. Frontiers in Psychology, 2, 345.
Plunkett, K., Hu, J. F., & Cohen, L. B. (2008). Labels can override perceptual
categories in early infancy. Cognition, 106(2), 665–681.
Robinson, C. W., Best, C. A., Deng, W. (Sophia), & Sloutsky, V. M. (2012). The role
of words in cognitive tasks: What, when, and how? Frontiers in Psychology.
Saffran, J. R., Aslin, R. N., & Newport, E. L. (1996). Statistical learning by 8-month-
old infants. Science, 274(5294), 1926–1928.
Shi, R., Werker, J. F., & Morgan, J. L. (1999). Newborn infants’ sensitivity to
perceptual cues to lexical and grammatical words. Cognition, 72(2), B11–B21.
Sloutsky, V. M., & Fisher, A. V. (2004). Induction and categorization in young
children: A similarity-based model. Journal of Experimental Psychology-
General, 133(2), 166–188.
Sloutsky, V. M., & Lo, Y.-F. (1999). How much does a shared name make things
similar? Part 1. Linguistic labels and the development of similarity judgment.
Developmental Psychology, 35(6), 1478–1492.
Smith, L. B., & Yu, C. (2008). Infants rapidly learn word-referent mappings via
cross-situational statistics. Cognition, 106(3), 1558–1568.
Wass, S. V., Smith, T. J., & Johnson, M. H. (2012). Parsing eye-tracking data of
variable quality to provide accurate fixation duration estimates in infants and
adults. Behavior Research Methods, 45(1), 229–250.
Waxman, S. R., & Markow, D. B. (1995). Words as invitations to form categories:
Evidence from 12- to 13-month-old infants. Cognitive Psychology, 29(3), 257–302.
Westermann, G., & Mareschal, D. (2014). From perceptual to language-mediated
categorization. Philosophical Transactions of the Royal Society B: Biological
Sciences, 369(1634), 20120391.
Wickham, H. (2016). ggplot2: Elegant graphics for data analysis. New York: Springer.
Winawer, J., Witthoft, N., Frank, M. C., Wu, L., Wade, A. R., & Boroditsky, L.
(2007). Russian blues reveal effects of language on color discrimination.
Proceedings of the National Academy of Sciences, 104(19), 7780–7785.
Yurovsky, D., Fricker, D. C., Yu, C., & Smith, L. B. (2014). The role of partial
knowledge in statistical word learning. Psychonomic Bulletin & Review, 21(1), 1–22.
Table 1. Results of mixed effects model. (Cell values were not fully recoverable in this version; significant effects included the label × response and trial × response interactions, both ps < .001.)
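The analysis summarized in Table 1 was run in R with lme4 (Bates et al., 2015). Purely as an illustration of this class of analysis, not the study's actual data or results, a minimal sketch in Python's statsmodels, with an invented dataset and hypothetical effect sizes, might look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data standing in for the real dataset: 20 hypothetical
# "infants", 8 familiarization trials each, looking times (ms) to a
# labeled (1) vs. unlabeled (0) stimulus. All values are invented.
rng = np.random.default_rng(1)
rows = []
for infant in range(20):
    baseline = rng.normal(2500, 300)      # per-infant random intercept
    for trial in range(1, 9):
        for label in (0, 1):
            looking = (baseline
                       - 120 * trial      # looking declines across trials
                       + 250 * label      # longer looking at labeled object
                       + rng.normal(0, 200))
            rows.append({"infant": infant, "trial": trial,
                         "label": label, "looking": looking})
data = pd.DataFrame(rows)

# Fixed effects of label, trial, and their interaction;
# random intercept for each infant.
model = smf.mixedlm("looking ~ label * trial", data, groups=data["infant"])
fit = model.fit()
print(fit.params[["label", "trial", "label:trial"]])
```

A fuller analogue of the reported model would also include the response predictor and, following Barr et al. (2013), the maximal random effects structure (random slopes as well as intercepts), which lme4 expresses as, for example, `(1 + label * trial | infant)`.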
Figure 1: Stimuli used in the current study.
Figure 2. Individual infants’ change in proportion target looking from the pre- to the
post-labeling phase.
Figure 3. Looking times to labeled and unlabeled stimuli during familiarization. Error
bars represent 95% confidence intervals after removal of random errors from the
model (Hohenstein & Kliegl, 2014).
Figure 4. Looking times to labeled stimulus, split by response score quartiles. Blue
line represents loess smoothing performed in the R package ggplot2 (Wickham, 2016).
Although infants' cognitions about the world must be influenced by experience, little research has directly assessed the relation between everyday experience and infants' visual cognition in the laboratory. Eye-tracking procedures were used to measure 4-month-old infants' eye-movements as they visually investigated a series of images. Infants with pet experience (N = 27) directed a greater proportion of their looking at the most informative region of animal stimuli-the head-than did infants without such experience (N = 21); the two groups of infants did not differ in their scanning of images of human faces or vehicles. Thus, infants' visual cognitions are influenced by everyday experience, and theories of cognitive development in infancy must account for the effect of experience on development.