Article

Enhanced Attention to Speaking Faces Versus Other Event Types Emerges Gradually Across Infancy


Abstract

The development of attention to dynamic faces versus objects providing synchronous audiovisual versus silent visual stimulation was assessed in a large sample of infants. Maintaining attention to the faces and voices of people speaking is critical for perceptual, cognitive, social, and language development. However, no studies have systematically assessed when, if, or how attention to speaking faces emerges and changes across infancy. Two measures of attention maintenance, habituation time (HT) and look-away rate (LAR), were derived from cross-sectional data of 2- to 8-month-old infants (N = 801). Results indicated that attention to audiovisual faces and voices was maintained across age, whereas attention to each of the other event types (audiovisual objects, silent dynamic faces, silent dynamic objects) declined across age. This reveals a gradually emerging advantage in attention maintenance (longer HTs, lower LARs) for audiovisual speaking faces compared with the other three event types. At 2 months, infants showed no attentional advantage for faces (with greater attention to audiovisual than to visual events); at 3 months, they attended more to dynamic faces than objects (in the presence or absence of voices); and by 4 to 5 and 6 to 8 months, significantly greater attention emerged to temporally coordinated faces and voices of people speaking compared with all other event types. Our results indicate that selective attention to coordinated faces and voices over other event types emerges gradually across infancy, likely as a function of experience with multimodal, redundant stimulation from person and object events.


... This challenging task is heavily scaffolded by interaction with caretakers during infancy (Gogate et al., 2001; Mundy & Burnette, 2005). Dynamic faces of people speaking are highly salient to young infants (e.g., Bahrick et al., 2016; Courage et al., 2006). Focusing attention on the face of a speaker provides a rich source of language learning opportunities for infants. ...
... First, attention to social information provides the input for language development. Further, research has shown that attention to audiovisual speech events increases gradually across infancy whereas attention to nonsocial audiovisual events declines (Bahrick et al., 2016). This increase across age in attention to social events may be due to several factors, including social scaffolding of language by caretakers and social interaction, which, in turn, leads infants to develop increasingly greater expertise in the domain of social events. ...
... Second, social events are typically more complex and variable than nonsocial events (Adolphs, 2001; Dawson et al., 2004). They provide an extraordinary amount of intersensory redundancy from rapidly changing coordinated patterns across face, voice, and gesture, making them more demanding of attentional resources than typical nonsocial events (Bahrick et al., 2016; Bahrick & Todd, 2012). Thus, task difficulty/complexity may be a significant factor in determining which contexts or protocols best predict outcomes at different ages across development. ...
Article
Full-text available
Parent language input is a well-established predictor of child language development. Multisensory attention skills (MASks; intersensory matching, shifting and sustaining attention to audiovisual speech) are also known to be foundations for language development. However, due to a lack of appropriate measures, individual differences in these skills have received little research focus. A newly established measure, the Multisensory Attention Assessment Protocol (MAAP), allows researchers to examine predictive relations between early MASks and later outcomes. We hypothesized that, along with parent language input, multisensory attention to social events (faces and voices) in infancy would predict later language outcomes. We collected data from 97 children (predominantly White and Hispanic, 48 males) participating in an ongoing longitudinal study assessing 12-, 18-, and 24-month MASks (MAAP) and parent language input (quality, quantity), and 18- and 24-month language outcomes (child speech production, vocabulary size). Results revealed 12-month intersensory matching (but not maintaining or shifting attention) of faces and voices in the presence of a distractor was a strong predictor of language. It predicted a variety of 18- and 24-month child language outcomes (expressive vocabulary, child speech production), even when holding traditional predictors constant: parent language input and SES (maternal education: 52% bachelor's degree or higher). Further, at each age, parent language input predicted just one outcome, expressive vocabulary, and SES predicted child speech production. These novel findings reveal infant intersensory matching of faces and voices in the presence of a distractor can predict which children might benefit most from parent language input and show better language outcomes.
... Within hours after birth they recognize and prefer their mother's face over a stranger's, but only if she has spoken to them. By four to eight months of age, typically developing infants attend to talking faces for a longer time than audio-visual objects or moving silent faces (Bahrick, Todd, Castellanos, & Sorondo, 2016). ...
... Among non-NICU infants, multi-modal stimuli are typically more salient than unimodal stimuli (for reviews: Tomalski, 2015; Reynolds & Roth, 2018), and audio-visual synchrony, such as in a talking face, can be detected even at early ages (e.g., Guellaï, Streri, Chopin, Rider, & Kitamura, 2016; Hyde, Jones, Flo, & Porter, 2011; Kuhl & Meltzoff, 1984; Teinonen, Aslin, Alku, & Csibra, 2008). Posing questions similar to ours, Bahrick and colleagues (Bahrick et al., 2016) examined multi-modality effects on social and non-social stimuli among non-NICU infants. Talking/silently moving faces were their social stimuli, and sound-producing/silently moving inanimate objects the non-social stimuli. ...
... Given the frequency of disassociated moving visual stimuli from accompanying synchronized auditory stimuli in the NICU environment, and in contrast to Bahrick et al. (2016), we included both moving and static faces in the current study to allow assessment of the additive effects of audio-visual synchrony, social content, and motion. Thus, we presented static silent faces, static faces with simultaneous vocalization, moving silent faces, and moving faces with synchronized vocalization. ...
Article
Full-text available
NICU infants are reported to have diminished social orientation and increased risk of socio-communicative disorders. In this eye tracking study, we used a preference for upright compared to inverted faces as a gauge of social interest in high medical risk full- and pre-term NICU infants. We examined the effects of facial motion and audio-visual redundancy on face and eye/mouth preferences across the first year. Upright and inverted baby faces were simultaneously presented in a paired-preference paradigm with motion and synchronized vocalization varied. NICU risk factors including birth weight, sex, and degree of CNS injury were examined. Overall, infants preferred the more socially salient upright faces, making this the first report, to our knowledge, of an upright compared to inverted face preference among high medical risk NICU infants. Infants with abnormalities on cranial ultrasound displayed lower social interest, i.e. less of a preferential interest in upright faces, when viewing static faces. However, motion selectively increased their upright face looking time to a level equal that of infants in other CNS injury groups. We also observed an age-related sex effect suggesting higher risk in NICU males. Females increased their attention to the mouth in upright faces across the first year, especially between 7–10 months, but males did not. Although vocalization increased diffuse attention toward the screen, contrary to our predictions, there was no evidence that the audio-visual redundancy embodied in a vocalizing face focused additional attention on upright faces or mouths. This unexpected result may suggest a vulnerability in response to talking faces among NICU infants that could potentially affect later verbal and socio-communicative development.
... This indicates that visual habituation relates to the development of more complex cognitive abilities. In addition, evidence from prior studies suggests that 'look-away rate' (brief gaze shifts away from the target stimulus during habituation) reflects processing efficiency and attentional control [32]. Higher look-away rates correspond to shorter look durations, and prior research has found that infants who demonstrate shorter looks during habituation are faster and more efficient at encoding information [33]. ...
... Because trials only ended when infants looked away for more than one second (or until the 20-s maximum), infants could make brief gaze shifts away from the screen (<1 s) which did not end the trial. Look-away rate per minute was calculated as the total number of looks away from the stimulus during habituation, divided by the total looking time, and multiplied by 60 s [32]. Lastly, we calculated the difference in mean looking time between familiar and novel trials during the dishabituation phase to yield a score that reflects infants' preference for the novel items. ...
... In the literature, the number of trials to reach habituation and dishabituation-i.e., novelty preference-has been shown to predict IQ later in development (for a meta-analysis, see [41]). Look-away rate has been shown to increase across age and to be significantly negatively correlated with habituation length, suggesting that these two indices of attention are tightly coupled during infancy [32,33]. Next, to examine potential differences in the habituation slopes (i.e., the change in looking time from one trial to the next), we conducted a growth curve analysis [42] of the first four habituation trials. ...
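The look-away rate described in the excerpts above is straightforward to operationalize. A minimal Python sketch, assuming looks away have already been counted from coded gaze data and total looking time is in seconds (the function and variable names here are illustrative, not taken from the cited papers):

```python
def look_away_rate_per_minute(num_looks_away, total_looking_time_s):
    """Look-away rate: brief looks away per minute of looking time.

    Computed as described in the excerpted protocol: total number of
    looks away from the stimulus during habituation, divided by total
    looking time in seconds, scaled to a per-minute rate.
    """
    if total_looking_time_s <= 0:
        raise ValueError("total looking time must be positive")
    return num_looks_away / total_looking_time_s * 60.0

# Example: 6 brief looks away over 120 s of looking time -> 3.0 per minute
rate = look_away_rate_per_minute(6, 120.0)
```

Under this formulation, a higher rate indicates more frequent brief disengagements per unit of looking, consistent with the reported negative correlation between look-away rate and habituation length.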
Article
Full-text available
Early cognitive development relies on the sensory experiences that infants acquire as they explore their environment. Atypical experience in one sensory modality from birth may result in fundamental differences in general cognitive abilities. The primary aim of the current study was to compare visual habituation in infants with profound hearing loss, prior to receiving cochlear implants (CIs), and age-matched peers with typical hearing. Two complementary measures of cognitive function and attention maintenance were assessed: the length of time to habituate to a visual stimulus, and look-away rate during habituation. Findings revealed that deaf infants were slower to habituate to a visual stimulus and demonstrated a lower look-away rate than hearing infants. For deaf infants, habituation measures correlated with language outcomes on standardized assessments before cochlear implantation. These findings are consistent with prior evidence suggesting that habituation and look-away rates reflect efficiency of information processing and may suggest that deaf infants take longer to process visual stimuli relative to the hearing infants. Taken together, these findings are consistent with the hypothesis that hearing loss early in infancy influences aspects of general cognitive functioning.
... According to the intersensory redundancy hypothesis (IRH; Bahrick & Lickliter, 2000, 2014), redundant temporally synchronous information guides selective attention at the expense of nonredundant information in early development, facilitating the perception, learning, and memory of amodal properties (e.g., rhythm, tempo, intensity) of events at the expense of unimodally specified properties. In addition, events with social stimuli, when compared with events with nonsocial stimuli, provide a greater amount of redundant information (across face and voice) that attracts and sustains attention (Bahrick, 2010; Bahrick, Todd, Castellanos, & Sorondo, 2016). The intersensory redundancy provided by events with social stimuli is particularly useful for guiding attention during the first year of life, as infant attention shifts from being exogenous or event driven during the first 6 months to being more endogenous or internally controlled toward the end of the first year (Colombo & Cheatham, 2006; Ruff & Rothbart, 1996). ...
... For example, Courage, Reynolds, and Richards (2006) found greater behavioral attention (longer looks) and physiological attention (greater changes in heart rate) to silent dynamic events that were social (Sesame Street scenes) compared with nonsocial (geometric patterns). Recently, Bahrick et al. (2016) extended this work to multisensory naturalistic events by examining attention to audiovisual faces and voices, audiovisual objects, silent dynamic faces, and silent dynamic objects. They found greater attentional maintenance to speaking faces compared with other event types for 4- and 5-month-olds and 6- to 8-month-olds. ...
... Infants' average and peak look durations were longer to the dynamic multimodal speaking face compared with the tapping hammer regardless of age or condition. These results support the well-established social preference that infants develop by 4 or 5 months of age (Bahrick et al., 2016;Bahrick, 2010;Courage et al., 2006). In addition to behavioral differences, infants' HR was significantly lower during the session for the social stimulus compared with the nonsocial stimulus; lower HR is often associated with more active engagement and stimulus processing. ...
Article
Attention is a state of readiness or alertness, associated with behavioral and psychophysiological responses, that facilitates learning and memory. Multisensory and dynamic events have been shown to elicit more attention and produce greater sustained attention in infants than auditory or visual events alone. Such redundant and often temporally synchronous information guides selectivity and facilitates perception, learning, and memory of properties of events specified by redundancy. In addition, events involving faces or other social stimuli provide an extraordinary amount of redundant information that attracts and sustains attention. In the current study, 4- and 8-month-old infants were shown 2-min multimodal videos featuring social or nonsocial stimuli to determine the relative roles of synchrony and stimulus category in inducing attention. Behavioral measures included average looking time and peak look duration, and convergent measurement of heart rate (HR) allowed for the calculation of HR-defined phases of attention: Orienting (OR), sustained attention (SA), and attention termination (AT). The synchronous condition produced an earlier onset of SA (less time in OR) and a deeper state of SA than the asynchronous condition. Social stimuli attracted and held attention (longer duration of peak looks and lower HR than nonsocial stimuli). Effects of synchrony and the social nature of stimuli were additive, suggesting independence of their influence on attention. These findings are the first to demonstrate different HR-defined phases of attention as a function of intersensory redundancy, suggesting greater salience and deeper processing of naturalistic synchronous audiovisual events compared with asynchronous ones.
... Additionally, by presenting infants with dynamic stimuli, it is possible to understand the effects of intersensory redundancy on attention to and recognition of faces. Beyond 4 months of age, infants have been shown to demonstrate greater attention to synchronous, multimodal faces than silent faces, synchronous, multimodal objects, or silent objects (Bahrick et al., 2016). ...
... We had multiple hypotheses for Experiment 1. First, infants in the experimental condition were expected to demonstrate sensitivity to synchrony and attraction to the synchronous familiarization stimulus (e.g., Bahrick et al., 2016, 2018; Curtindale et al., 2019), indicated by increased looking to the synchronous face during familiarization. Because the amodal stimulus properties are most salient, we expected that infants would be prevented from attending to modality-specific face properties, discouraging recognition of the synchronous familiar face during the VPC trials. ...
Article
Full-text available
This study examined the role of intersensory redundancy on 12-month-old infants’ attention to and processing of face stimuli. Two experiments were conducted. In Experiment 1, 72 12-month-olds were tested using an online platform called Lookit. Infants were familiarized with two videos of an actor reciting a children’s story presented simultaneously. A soundtrack either matched one of the videos (experimental condition) or neither of the videos (control condition). Visual-paired comparison (VPC) trials were completed to measure looking preferences for the faces presented synchronously and asynchronously during familiarization and for novel faces. Neither group displayed looking preferences during the VPC trials. It is possible that the complexity of the familiarization phase made the modality-specific face properties (i.e., facial characteristics and configuration) difficult to process. In Experiment 2, 56 12-month-old infants were familiarized with the video of only one actor presented either synchronously or asynchronously with the soundtrack. Following familiarization, participants completed a VPC procedure including the familiar face and a novel face. Results from Experiment 2 showed that infants in the synchronous condition paid more attention during familiarization than infants in the asynchronous condition. Infants in the asynchronous condition demonstrated recognition of the familiar face. These findings suggest that the competing face stimuli in Experiment 1 were too complex for the facial characteristics to be processed. The procedure in Experiment 2 led to increased processing of the face in the asynchronous presentation. These results indicate that intersensory redundancy in the presentation of synchronous audiovisual faces is very salient, discouraging the processing of modality-specific visual properties. This research contributes to the understanding of face processing in multimodal contexts, which have been understudied, although a great deal of naturalistic face exposure occurs multimodally.
... Additionally, infants have been shown to develop a preference for fixating speaking faces over silent dynamic faces from 2 to 8 months, such that older infants increase their looks to speaking faces and decrease their look-away rates (Bahrick et al., 2016). Thus, the development of preference for specific faces appears to be robust across tasks; infants' increased sensitivity to certain faces moves toward faces to which infants are frequently exposed in their daily environment. ...
... Infants' gazes also increase toward more communicative faces over the course of development, while their gazes to less communicative faces remain similar (Bahrick et al., 2016). This preference for specific types of faces could be an index of perceptual learning, which refers to an increased sensitivity to specific faces frequently present in the environment and decreased sensitivity to others (Maurer and Werker, 2014). ...
Article
Full-text available
Infants acquire their first words through interactions with social partners. In the first year of life, infants receive a high frequency of visual and auditory input from faces, making faces a potential strong social cue in facilitating word-to-world mappings. In this position paper, we review how and when infant gaze to faces is likely to support their subsequent vocabulary outcomes. We assess the relevance of infant gaze to faces selectively, in three domains: infant gaze to different features within a face (that is, eyes and mouth); then to faces (compared to objects); and finally to more socially relevant types of faces. We argue that infant gaze to faces could scaffold vocabulary construction, but its relevance may be impacted by the developmental level of the infant and the type of task with which they are presented. Gaze to faces proves relevant to vocabulary, as gazes to eyes could inform about the communicative nature of the situation or about the labeled object, while gazes to the mouth could improve word processing, all of which are key cues to highlighting word-to-world pairings. We also discover gaps in the literature regarding how infants’ gazes to faces (versus objects) or to different types of faces relate to vocabulary outcomes. An important direction for future research will be to fill these gaps to better understand the social factors that influence infant vocabulary outcomes.
... Although prior research has investigated differences in looking to audiovisual versus visual stimulation, particularly in infants (Bahrick, Todd, Castellanos, & Sorondo, 2016; Reynolds, Zhang, & Guy, 2013), this research has focused primarily on measures of look duration. Little research has characterized differences in speed, scanning patterns, or strategies for exploring audiovisual and visual stimulation. ...
... At the same time, the protocol is simple enough for infants and children to show reliable intersensory matching. Further, despite using a common protocol, participants of different ages will likely show selective attention to different properties of the events and use different processing strategies (see Bahrick, 2001;Bahrick et al., 2016;Franchak, Heeger, Hasson, & Adolph, 2016;Frank, Vul, & Saxe, 2012). For older children and adults, the linguistic content of the social events may be more salient as their attention and language skills mature and affect speech perception and audiovisual processing (for related ideas see Bowerman & Levinson, 2001;Frank et al., 2012). ...
Article
Full-text available
Detecting intersensory redundancy guides cognitive, social, and language development. Yet, researchers lack fine-grained, individual difference measures needed for studying how early intersensory skills lead to later outcomes. The intersensory processing efficiency protocol (IPEP) addresses this need. Across a number of brief trials, participants must find a sound-synchronized visual target event (social, nonsocial) amid five visual distractor events, simulating the “noisiness” of natural environments. Sixty-four 3- to 5-year-old children were tested using remote eye-tracking. Children showed intersensory processing by attending to the sound-synchronous event more frequently and longer than in a silent visual control, and more frequently than expected by chance. The IPEP provides a fine-grained, nonverbal method for characterizing individual differences in intersensory processing appropriate for infants and children.
... Because the perception of facial expressions of interaction partners is crucial for the sensorimotor and social development of infants and toddlers (Bahrick et al., 2016), the wearing of face masks in ECEC centres has been critically discussed, but there is a lack of research analysing their effects on children or relationships in ECEC centres. Due to the mentioned characteristics of high-quality interactions between staff, children, and parents, we hypothesize that measures that restrict staff's behavior in relation to warm, sensitive behavior (e.g., smiling at the child) or positive physical contact with children may have a negative effect on staff-child interactions. ...
Article
Full-text available
During the COVID-19 pandemic, early childhood education and care (ECEC) centres implemented various protective and hygiene measures. Some of these, such as maintaining distance or wearing face masks, temporarily restricted interactions between pedagogical staff, children, and parents. This may have made it difficult for staff to provide high-quality interactions with positive and sensitive attitudes towards children and parents. The long-term effects of these distancing measures on the quality of daily interactions in ECEC centres have been largely unexplored. Based on a panel survey of German ECEC centre leaders conducted over a period of one and a half years, we used random-effect-within-between models to provide a long-term assessment of the effects of specific protective measures on different levels of interactions within ECEC centres. These levels include staff-child interactions, interactions between children, and cooperation between staff and parents. Our findings indicate that child-child interactions were largely unaffected by the measures, while staff-parent interactions suffered the most. Communication with parents and regular implementation of pedagogical practices had a stabilizing effect, while keeping distance from children, wearing face masks, and (pandemic-related) staff shortages worsened staff-child interactions. Additionally, our findings revealed that adopting a stricter group concept was associated with improved staff-child interactions. Centers that had previously used an open group concept reported lower quality interactions during the pandemic. This study provides valuable insights into the effects of protective measures on daily interactions in ECEC centres, highlighting the importance of considering both short-term and long-term effects when implementing protective measures.
... Intersensory processing of social, but not nonsocial, events may predict language outcomes because social interactions provide the context in which language learning opportunities most often occur. Further, compared to nonsocial events, social events provide an extraordinary amount of intersensory redundancy across face, voice, and gesture (Bahrick et al., 2016; Bahrick & Todd, 2012) and are typically more complex and variable than nonsocial events (Adolphs, 2001; Dawson et al., 2004), thus demanding greater attentional resources from infants. It may be that the challenge of processing social events on the IPEP is more optimally matched to the processing capabilities of 6-month-olds, resulting in meaningful variability across infants that is related to language outcomes. ...
Article
Full-text available
Intersensory processing of social events (e.g., matching sights and sounds of audiovisual speech) is a critical foundation for language development. Two recently developed protocols, the Multisensory Attention Assessment Protocol (MAAP) and the Intersensory Processing Efficiency Protocol (IPEP), assess individual differences in intersensory processing at a sufficiently fine-grained level for predicting developmental outcomes. Recent research using the MAAP demonstrates 12-month intersensory processing of face-voice synchrony predicts language outcomes at 18- and 24-months, holding traditional predictors (parent language input, SES) constant. Here, we build on these findings testing younger infants using the IPEP, a more comprehensive, fine-grained index of intersensory processing. Using a longitudinal sample of 103 infants, we tested whether intersensory processing (speed, accuracy) of faces and voices at 3- and 6-months predicts language outcomes at 12-, 18-, 24-, and 36-months, holding traditional predictors constant. Results demonstrate intersensory processing of faces and voices at 6-months (but not 3-months) accounted for significant unique variance in language outcomes at 18-, 24-, and 36-months, beyond that of traditional predictors. Findings highlight the importance of intersensory processing of face-voice synchrony as a foundation for language development as early as 6-months and reveal that individual differences assessed by the IPEP predict language outcomes even 2.5-years later.
... While the protective effect of specific measures (above all ventilation, group separation, vaccination) on the spread of COVID-19 infections in ECEC centres had been confirmed (Neuberger et al., 2022a, 2022b), the effects of the measures on pedagogical practices and interactions had not yet been sufficiently studied, especially with regard to their medium- and long-term effects over the last 2.5 years of the pandemic. Because of the great importance of the perception of facial expressions of interaction partners for the sensorimotor and social development of infants and toddlers (Bahrick et al., 2016), the wearing of face masks in ECEC centres had been critically discussed, but there is a lack of research analysing their effects on children or relationships in ECEC centres. ...
Preprint
Full-text available
Early Education and Care (ECEC) centres had implemented a variety of protective and hygiene measures during the COVID-19 pandemic. Some of these measures temporarily restricted the behaviour of pedagogical staff, children and parents, for example keeping distance from each other or wearing face masks. This may have made it difficult for staff to offer high quality interactions with a positive, sensitive attitude towards children and parents, as would be important for good pedagogical work. Long-term effects of these distance measures on the quality of daily interactions in ECEC centres are largely unexplored. Based on a panel survey among German ECEC centre leaders over a period of one and a half years, we provide a long-term assessment of the impact of specific protective measures on different levels of interactions within ECEC centres, namely on staff-child interactions, interactions of children with each other and the cooperation between staff and parents. We found child-child interaction largely unaffected by the measures, while staff-parent interaction suffered the most. Communication with parents and regular implementation of pedagogical practices have a stabilizing effect, while keeping distance from children, face masks and (pandemic-related) staff shortages worsen staff-child interactions.
... These rudimentary audiovisual processing skills are further refined with experience in the natural world [6][7][8] . Infants improve their ability to differentiate and pick up perceptual information provided by faces and voices with increased exposure to audiovisual speech [9][10][11][12][13] . This improvement in multisensory processing of familiar types of faces and voices often encountered in the infant's native environment is related to a developmental process known as perceptual narrowing 6,7,[14][15][16] . ...
Article
Full-text available
The current study utilized eye-tracking to investigate the effects of intersensory redundancy and language on infant visual attention and detection of a change in prosody in audiovisual speech. Twelve-month-old monolingual English-learning infants viewed either synchronous (redundant) or asynchronous (non-redundant) presentations of a woman speaking in native or non-native speech. Halfway through each trial, the speaker changed prosody from infant-directed speech (IDS) to adult-directed speech (ADS) or vice versa. Infants focused more on the mouth of the speaker on IDS trials compared to ADS trials regardless of language or intersensory redundancy. Additionally, infants demonstrated greater detection of prosody changes from IDS speech to ADS speech in native speech. Planned comparisons indicated that infants detected prosody changes across a broader range of conditions during redundant stimulus presentations. These findings shed light on the influence of language and prosody on infant attention and highlight the complexity of audiovisual speech processing in infancy.
... Over the course of the pandemic, evidence repeatedly emerged that specific measures, such as the wearing of face coverings, can protect against infection but can also create a barrier in the social interaction of children, staff, and parents. Given that perceiving the facial expressions of interaction partners is important for sensorimotor and social development even in infants and toddlers (Bahrick et al., 2016; Grossman, 2013), no general recommendation that staff in day-care centres (Kitas) wear face coverings was initially issued. As the pandemic situation deteriorated and Kitas were increasingly affected by COVID-19 infection cases, an (incidence-dependent, in practice largely permanent) mask requirement in Kitas was finally introduced from autumn 2020 onward (e.g. ...
Article
Full-text available
The COVID-19 pandemic required day-care centres (Kitas) to reorganize their services at short notice and to implement a wide range of protective and hygiene measures. The effects of these measures on the interactional levels of pedagogical practice are examined with respect to staff's interactions with the children, the children's interplay with one another, and the centre's cooperation with parents. The data come from a repeated written survey of 2,529 centre leaders conducted between October 2020 and June 2021, which collected both current and retrospective assessments by the leaders of the quality of different levels of interaction. The results show that the introduction of specific pandemic-related measures, such as the distancing requirement, the wearing of masks, or barring parents from entering the centre, was associated with a significant deterioration in ratings of different levels of interaction. More positive ratings, by contrast, were associated with more frequent communication with parents and children, including non-face-to-face communication. Moreover, leaders of centres with a high proportion of socially disadvantaged children in particular reported a deterioration.
... In the first weeks of postnatal life, newborns are able to demonstrate brief periods of attention to visual stimuli but they generally do not demonstrate longer periods of sustained attention due to limited periods of alert wakefulness (Fantz, 1963; Rose et al., 2004; Reynolds et al., 2013). While older infants prefer to look at faces and other complex stimuli, younger infants tend to look at areas high in perceptual salience that stand out from other stimuli in the visual field, regardless of stimulus complexity or social/emotional valence (e.g., Bahrick et al., 2016; Frank et al., 2009; Reynolds, 2015; Reynolds and Roth, 2018). ...
Chapter
The first year of postnatal life is characterized by significant developmental gains in attention. This chapter explores the development of visual attention in infancy with a particular focus on the development of selective and sustained attention. Research examining behavioral and neural correlates of attention, and theoretical models relating the development of neural systems to developmental gains in attention, are reviewed. The development of attention is characterized by a shift from more reflexive orienting driven by low-level stimulus characteristics in the newborn period to increasing volitional control of attention that leads to more effective perceptual processing and learning in later infancy.
... Attention to orofacial gestures enhances speech processing in infants (Burnham & Dodd, 2004; Teinonen, Aslin, Alku, & Csibra, 2008) and in adults (Lansing & McConkie, 1999). This enhanced attention to dynamic gaze, facial and vocal cues emerges early in the first year of life (Bahrick, Todd, Castellanos, & Sorondo, 2016; Farroni, Mansfield, Lai, & Johnson, 2003; Senju & Csibra, 2008). The development of these multisensory processes is highly experience-dependent, as exemplified by perceptual narrowing and tuning to facial and speech patterns most prevalent in the child's native environment (Pascalis et al., 2014). ...
Article
Background: Impaired attention to faces of interactive partners is a marker for autism spectrum disorder (ASD) in early childhood. However, it is unclear whether children with ASD avoid faces or find them less salient and whether the phenomenon is linked with the presence of eye contact or speech. Methods: We investigated the impacts of speech (SP) and direct gaze (DG) on attention to faces in 22-month-old toddlers with ASD (n = 50) and typically developing controls (TD, n = 47) using the Selective Social Attention 2.0 (SSA 2.0) task. The task consisted of four conditions where the presence (+) and absence (-) of DG and SP were systematically manipulated. The severity of autism symptoms and verbal and nonverbal skills were characterized concurrently with eye tracking at 22.4 (SD = 3.2) months and prospectively at 39.8 (SD = 4.3) months. Results: Toddlers with ASD looked less than TD toddlers at face and mouth regions only when the actress was speaking (direct gaze absent with speech, DG-SP+: d = 0.99, p < .001 for face, d = 0.98, p < .001 for mouth regions; direct gaze present with speech, DG+SP+, d = 1.47, p < .001 for face, d = 1.01, p < .001 for mouth regions). Toddlers with ASD looked less at the eye region only when both gaze and speech cues were present (d = 0.46, p = .03). Salience of the combined DG and SP cues was associated concurrently and prospectively with the severity of autism symptoms, and the association remained significant after controlling for verbal and nonverbal levels. Conclusions: The study links poor attention to faces with limited salience of audiovisual speech and provides no support for the face avoidance hypothesis in the early stages of ASD. These results are consequential for research on early discriminant and predictive biomarkers as well as identification of novel treatment targets.
... An additional question about the input and its potential special characteristics for young infants concerns the multimodal nature of these early persistent dynamic experiences, and particularly the sounds that parents make when their faces are in front of their infant. Some laboratory research suggests that infants younger than 3 months may not adeptly use multimodal synchronies to recognize faces (Bahrick, Gogate, & Ruiz, 2002; Bahrick, Todd, Castellanos, & Sorondo, 2016), even though they are sensitive to coordination of their caregivers' typical facial expressions and sounds (Izard et al., 1995; Kahana-Kalman & Walker-Andrews, 2001; Walker-Andrews, 1997). New evidence suggests that multimodal (auditory-visual) experiences in early infancy may have long-term consequences. ...
Article
Full-text available
The regularities in very young infants' visual worlds likely have out-sized effects on the development of the visual system because they comprise the first-in experience that tunes, maintains, and specifies the neural substrate from low-level to higher-level representations and therefore constitute the starting point for all other visual learning. Recent evidence from studies using head cameras suggests that the frequency of faces available in early infant visual environments declines over the first year and a half of life. The primary question for the present paper concerns the temporal structure of face experiences: Is frequency the key exposure dimension distinguishing younger and older infants' face experiences, or is it the duration for which faces remain in view? Our corpus of head-camera images collected as infants went about their daily activities consisted of over a million individually coded frames sampled at 0.2 Hz from 232 hours of infant-perspective scenes, recorded from 51 infants aged 1 month to 15 months. The major finding from this corpus is that very young infants (1 to 3 months) not only have more frequent face experiences but also more temporally persistent ones. The repetitions of the same very few face identities presenting up-close and frontal views are exaggerated in more persistent runs of the same face, and these persistent runs are more frequent for the youngest infants. The implications of early experiences consisting of extended repeated exposures of up-close frontal views for visual learning are discussed.
... Furthermore, the results of these studies imply that research on infant face processing that utilizes static visual stimuli may not generalize well to infant face processing of dynamic faces in multimodal contexts. Bahrick et al. (2016) examined 2-to 8-month-old infants' attention to faces compared to objects under static and dynamic audiovisual and unimodal visual presentation conditions. Interestingly, they found no attentional bias for faces compared to objects for infants at 2 months of age. ...
Article
Full-text available
We present an integrative review of research and theory on major factors involved in the early development of attentional biases to faces. Research utilizing behavioral, eye-tracking, and neuroscience measures with infant participants as well as comparative research with animal subjects are reviewed. We begin with coverage of research demonstrating the presence of an attentional bias for faces shortly after birth, such as newborn infants’ visual preference for face-like over non-face stimuli. The role of experience and the process of perceptual narrowing in face processing are examined as infants begin to demonstrate enhanced behavioral and neural responsiveness to mother over stranger, female over male, own- over other-race, and native over non-native faces. Next, we cover research on developmental change in infants’ neural responsiveness to faces in multimodal contexts, such as audiovisual speech. We also explore the potential influence of arousal and attention on early perceptual preferences for faces. Lastly, the potential influence of the development of attention systems in the brain on social-cognitive processing is discussed. In conclusion, we interpret the findings under the framework of Developmental Systems Theory, emphasizing the combined and distributed influence of several factors, both internal (e.g., arousal, neural development) and external (e.g., early social experience) to the developing child, in the emergence of attentional biases that lead to enhanced responsiveness and processing of faces commonly encountered in the native environment.
... These 127 infants were part of a larger study in which mean performance for a variety of variables was reported, including total time to habituation, the look-away rate during habituation, the mean duration of the first two habituation trials, mean looking time over test trials, and visual recovery scores [2,31]. The durations of infants' individual looks and the durations of total looking times within individual trials-the focus of the current study-have not been published. ...
Article
Full-text available
Although looking time is used to assess infant perceptual and cognitive processing, little is known about the temporal structure of infant looking. To shed light on this temporal structure, 127 three-month-olds were assessed in an infant-controlled habituation procedure and presented with a pre-recorded display of a woman addressing the infant using infant-directed speech. Previous individual look durations positively predicted subsequent look durations over a six look window, suggesting a temporal dependency between successive infant looks. The previous look duration continued to predict the subsequent look duration after accounting for habituation-linked declines in look duration, and when looks were separated by an inter-trial interval in which no stimulus was displayed. Individual differences in temporal dependency, the strength of associations between consecutive look durations, are distinct from individual differences in mean infant look duration. Nevertheless, infants with stronger temporal dependency had briefer mean look durations, a potential index of stimulus processing. Temporal dependency was evident not only between individual infant looks but between the durations of successive habituation trials (total looking within a trial). Finally, temporal dependency was evident in associations between the last look at the habituation stimulus and the first look at a novel test stimulus. Thus temporal dependency was evident across multiple timescales (individual looks and trials comprised of multiple individual looks) and persisted across conditions including brief periods of no stimulus presentation and changes from a familiar to novel stimulus. Associations between consecutive look durations over multiple timescales and stimuli suggest a temporal structure of infant attention that has been largely ignored in previous work on infant looking.
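The lag-one dependency described above amounts to regressing each look duration on the one immediately preceding it. The sketch below illustrates the idea on synthetic data; the decay rate, noise level, and floor on look duration are illustrative assumptions, not values from the study.

```python
import numpy as np

def lag1_dependency(look_durations):
    """Slope and correlation of a regression of each look duration
    on the immediately preceding look duration."""
    x = np.asarray(look_durations[:-1], dtype=float)  # previous looks
    y = np.asarray(look_durations[1:], dtype=float)   # subsequent looks
    slope, _intercept = np.polyfit(x, y, 1)
    r = np.corrcoef(x, y)[0, 1]  # strength of the temporal dependency
    return slope, r

# Hypothetical habituation sequence: look durations (in seconds) that
# shorten across trials while remaining tied to the preceding look.
rng = np.random.default_rng(0)
looks = [8.0]
for _ in range(19):
    looks.append(max(0.5, 0.8 * looks[-1] + rng.normal(0.0, 0.5)))

slope, r = lag1_dependency(looks)
print(f"lag-1 slope = {slope:.2f}, r = {r:.2f}")
```

A positive slope and correlation indicate that longer looks tend to follow longer looks. Note that in the study this dependency remained after accounting for habituation-linked declines in look duration; this sketch does not detrend, so the overall decline also contributes to the correlation.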
Article
Full-text available
This study examined European American and Hispanic American mothers' multimodal communication to their infants (N = 24). The infants were from three age groups representing three levels of lexical-mapping development: prelexical (5 to 8 months), early-lexical (9 to 17 months), and advanced-lexical (21 to 30 months). Mothers taught their infants four target (novel) words by using distinct objects during a semistructured play episode. Recent research suggests that young infants rely on temporal synchrony to learn syllable–object relations, but later, the role of synchrony diminishes. Thus, mothers' target and nontarget naming were coded for synchrony and other communication styles. The results indicated that mothers used target words more often than nontarget words in synchrony with object motion and sometimes touch. Thus, ‘multimodal motherese’ likely highlights target word-referent relations for infants. Further, mothers tailored their communication to infants' level of lexical-mapping development. Mothers of prelexical infants used target words in synchrony with object motion more often than mothers of early- and advanced-lexical infants. Mothers' decreasing use of synchrony across age parallels infants' decreasing reliance on synchrony, suggesting a dynamical and reciprocal environment–organismic relation.
Article
Full-text available
Olfactory responsiveness was assessed in 24 neonates born to mothers who had or had not consumed anise flavour during pregnancy. Both groups of infants were followed-up for behavioural markers of attraction and aversion when exposed to anise odour and a control odour immediately after birth and on day 4. Infants born to anise-consuming mothers evinced a stable preference for anise odour over this period, whereas those born to anise non-consuming mothers displayed aversion or neutral responses. This study provides the first clear evidence that through their diet human mothers influence the hedonic polarity of their neonates' initial olfactory responses. The findings have potential implications for the early mother-to-infant transmission of chemosensory information relative to food and addictive products.
Article
Full-text available
Selective attention is the gateway to perceptual processing, learning, and memory, and is a skill honed through extensive experience. However, little research has focused on how selective attention develops. Here we synthesize established and new findings assessing the central role of redundancy across the senses in guiding and constraining this process in infancy and early childhood. We highlight research demonstrating the dual role of intersensory redundancy (its facilitating and interfering effects) on detection and perceptual processing of various properties of objects and events.
Article
Full-text available
There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. © 2014 The Authors. Dev Psychobiol Published by Wiley Periodicals, Inc.
Article
Full-text available
Acoustical changes in the prosody of mothers’ speech to infants are distinct and near universal. However, less is known about the visible properties of mothers’ infant-directed (ID) speech, and their relation to speech acoustics. Mothers’ head movements were tracked as they interacted with their infants using ID speech, and compared to movements accompanying their adult-directed (AD) speech. Movement measures along three dimensions of head translation and three axes of head rotation were calculated. Overall, more head movement was found for ID than AD speech, suggesting that mothers exaggerate their visual prosody in a manner analogous to the acoustical exaggerations in their speech. Regression analyses examined the relation between changing head position and changing acoustical pitch (F0) over time. Head movements and voice pitch were more strongly related in ID speech than in AD speech. When these relations were examined across time windows of different durations, stronger relations were observed for shorter time windows (<5 sec). However, the particular form of these more local relations did not extend or generalize to longer time windows. This suggests that the multimodal correspondences in speech prosody are variable in form, and occur within limited time spans.
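The window-size effect described above can be illustrated with a short script. This is a hypothetical reconstruction on synthetic signals, not the authors' analysis pipeline: the sampling rate, window lengths, and the periodic flip in the head-pitch mapping are all assumptions chosen to make the local relation strong and the long-span relation weak.

```python
import numpy as np

def windowed_correlation(head_pos, f0, win_len):
    """Mean absolute correlation between head position and pitch (F0),
    computed within successive non-overlapping windows of win_len samples."""
    rs = []
    for start in range(0, len(f0) - win_len + 1, win_len):
        h = head_pos[start:start + win_len]
        p = f0[start:start + win_len]
        rs.append(abs(np.corrcoef(h, p)[0, 1]))
    return float(np.mean(rs))

# Synthetic signals sampled at 100 Hz for 40 s: pitch tracks head height
# locally, but the direction of the mapping flips every 10 s.
rng = np.random.default_rng(1)
t = np.arange(0.0, 40.0, 0.01)
head = np.sin(2 * np.pi * 0.5 * t)                      # head bobbing
mapping = np.sign(np.sin(2 * np.pi * 0.05 * t) + 1e-9)  # flips every 10 s
f0 = 200 + 30 * head * mapping + rng.normal(0, 5, t.size)

short = windowed_correlation(head, f0, 300)    # 3-s windows
long_ = windowed_correlation(head, f0, 2000)   # 20-s windows
print(f"mean |r|: 3-s windows {short:.2f}, 20-s windows {long_:.2f}")
```

Within short windows the head-pitch relation is consistent and the mean |r| is high; over 20-s windows the changing mapping washes it out. This mirrors the finding that the form of the local correspondence does not generalize to longer time spans.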
Article
Full-text available
Three aspects of the development of visual orienting in infants of 2, 3, and 4 months of age are examined in this paper. These are the age of onset and sequence of development of (1) the ability to readily disengage gaze from a stimulus, (2) the ability to consistently show "anticipatory" eye movements, and (3) the ability to use a central cue to predict the spatial location of a target. Results indicated that only the 4-month-old group was easily able to disengage from an attractive central stimulus to orient toward a simultaneously presented target. The 4-month-old group also showed more than double the percentage of "anticipatory" looks shown by the other age groups. Finally, only the 4-month-old group showed significant evidence of being able to acquire the contingent relationship between a central cue and the spatial location (to the right or to the left) of a target. Measures of anticipatory looking and contingency learning were not correlated. These findings are, in general terms, consistent with the predictions of maturational accounts of the development of visual orienting.
Article
Full-text available
This chapter presents and extends a model of human socio-emotional development that we have recently proposed (Gergely & Watson, 1996). Our model gains its uniqueness from the fact that even though it embraces the common assumption that young infants are sensitive to contingency experience, at the same time, it rejects the general view that they are initially perceptually aware of their specific basic emotion states. Indeed, it is our contention that contingency detection is crucially involved in an infant's progressively developing awareness of his or her internal affective states. More specifically, our "social-biofeedback model" holds that the caregiver's contingent reflections of the infant's emotion-expressive displays play a central causal role in the development of emotional self-awareness and control that is mediated by the contingency detection module. We begin by trying to make very clear what we mean by contingency perception, contingency seeking, and its special limitation to perceivable contingencies, because we shall place considerable theoretical weight on these foundational constructs. We then consider the implications of this view of early contingency perception when conjoined with an assumption that an infant begins life with little or no awareness of his or her dispositional states. That leads us to our social-biofeedback model of how the infant progressively becomes aware of his or her emotional dispositions through the process previously identified as social mirroring. Our model includes an assumption about a change in the target magnitude of contingency seeking that appears to occur at about 3 months of age. The possible relevance of this for the understanding of the deviant developmental pattern in autism is also briefly considered.
Article
Full-text available
INTRODUCTION: Psychophysiology is the study of the relation between psychological events and biological processes in human participants. The electrocardiogram (ECG) and heart rate (HR) have been commonly used measures throughout the history of psychophysiological research. Early studies found that stimuli eliciting differing emotional responses in adults also elicited HR responses differing in magnitude and direction of change from baseline (e.g., Darrow, 1929; Graham & Clifton, 1966; Lacey, 1959). Methods of measuring the ECG have improved vastly, as has knowledge regarding the relation between HR and cognitive activity. Heart rate has been particularly useful in developmental psychophysiological research. Researchers interested in early cognitive and perceptual development have utilized HR as a window into cognitive activity for infants before they are capable of demonstrating complex behaviors or providing verbal responses. Also, the relation between brain control of HR and the behavior of HR during psychological activity has informed work in developmental cognitive neuroscience. In this chapter, we address the use of the ECG and HR in research on infants. We review three ways in which HR has been used in psychophysiological research: HR changes, attention phases defined by HR, and HR variability (particularly respiratory sinus arrhythmia). Topics we focus on are the areas of the brain that are indexed with these measures, developmental changes associated with these measures, and the relation of these measures to psychological processes. Before covering research with infants, we briefly review background information on the heart, the ECG and HR, and its relation to psychophysiology.
Chapter
Full-text available
The early development of attentional selectivity is thought to be strongly influenced by the infant's sensitivity to salient properties of stimulation such as contrast, movement, intensity, and intersensory redundancy (overlapping information across auditory, visual, tactile and/or proprioceptive stimulation for properties of objects and events). In this chapter, the powerful role of intersensory redundancy in guiding and shaping early selective attention, and, in turn, perception and learning is explored. The recent empirical and theoretical efforts to better understand what guides the allocation of selective attention during early development are reviewed and the implications of early selective attention for perceptual, cognitive, and social development are briefly discussed.
Book
Full-text available
Since Scaife and Bruner’s (1975) seminal report that many 8- to 10-month-olds will follow an experimenter’s change in eye-gaze, the field of developmental psychology has come a long way in understanding the development and significance of this behavior. From my perspective, it was the publication of Moore and Dunham’s (1995) edited volume twenty years later that placed the topic of following another’s direction of gaze into the forefront of our field. Prior to Moore and Dunham’s volume, research in this area was rather sparse and was being conducted in a handful of labs. It was in this volume that many of us, including myself as a then first-year graduate student, came to recognize and understand something of the interconnected nature of joint visual attention and the rest of development. We learned of the association between autism and infants’ inability to follow another’s line of visual regard, we began to see the relation of joint attention and early word learning and theory of mind, and we saw this behavior as another way to examine the nature of early social communication and interaction. In the nearly 12 years since the publication of Moore and Dunham we have continued to make substantial gains in understanding the development of joint attention. We have examined the relation of initiating joint attention and responding to joint attention and autism, the role of the superior temporal sulcus and other interconnected areas of the brain associated with face processing, and the following of another’s eye-gaze as it informs us of attention and the flexibility of attention. We have begun to examine the connection between gaze following and children’s susceptibility to deception, non-human primates’ proclivity for following another’s direction of gaze, and recently the significance of gaze in face recognition and social exchanges between adults.
Along with these more recent areas of inquiry we have continued to examine the links of joint visual attention and early language, theory of mind, and perceptual development. The current volume represents all of these areas, both the “old” and the “new”. In preparing this volume it became clear that with each new and exciting result there came new and unexplored questions; thus we still have a long way to go! Taken together, all of the chapters in this volume highlight what I believe to be two important points. First, if we are to draw nearer to understanding human development then we must study it in situ. Certainly we need to study the “individual”, but we must also study what occurs between and even among people. I believe it is what occurs within the context of these social exchanges and relationships, which so frequently involve looking where another is looking, that holds many keys to our understanding of social, cognitive, perceptual and neurological development. Second, joint visual attention/gaze-following is not merely a developmental precursor; rather, these behaviors and their relations to other developmental achievements are all part of the dynamic system that is development. My appreciation goes to each author for their contribution and patience with this volume. I express my gratitude to Lori Handleman at Lawrence Erlbaum Associates for her sense of humor and assistance with the editorial process, Steve Chisholm at MidAtlantic Books for preparing the book for production, Sebastián Picker for the cover art, and to Kang Lee and Darwin Muir, who provided assistance when needed. Enjoy!
Article
Full-text available
Although research has demonstrated impressive face perception skills of young infants, little attention has focused on conditions that enhance versus impair infant face perception. The present studies tested the prediction, generated from the intersensory redundancy hypothesis (IRH), that face discrimination, which relies on detection of visual featural information, would be impaired in the context of intersensory redundancy provided by audiovisual speech and enhanced in the absence of intersensory redundancy (unimodal visual and asynchronous audiovisual speech) in early development. Later in development, following improvements in attention, faces should be discriminated in both redundant audiovisual and nonredundant stimulation. Results supported these predictions. Two-month-old infants discriminated a novel face in unimodal visual and asynchronous audiovisual speech but not in synchronous audiovisual speech. By 3 months, face discrimination was evident even during synchronous audiovisual speech. These findings indicate that infant face perception is enhanced and emerges developmentally earlier following unimodal visual than synchronous audiovisual exposure and that intersensory redundancy generated by naturalistic audiovisual speech can interfere with face processing. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Article
Full-text available
The development of infants' ability to detect the arbitrary relation between the color/shape of an object and the pitch of its impact sound was investigated using an infant-control habituation procedure. Ninety-six infants of 3, 5, or 7 months were habituated to films of two objects differing in color and shape, striking a surface in an erratic pattern. One object produced an impact sound of a high pitch, and the other object produced a low pitch. During test trials, infants in the experimental conditions received a change in the pairing of pitch with color/shape, whereas controls received no change. Results indicate that visual recovery to the change in pitch-color/shape relations was significantly greater than that of age-matched controls at 7 months, but not at 3 or 5 months. A prior study demonstrated that by 3 months, infants were able to discriminate the color/shape and pitch changes of these events. However, it is not until 7 months that they show evidence of detecting the arbitrary relation between these attributes.
Article
Full-text available
Bahrick & Lickliter (2000) proposed an "intersensory redundancy hypothesis" which holds that, in early development, experiencing an event redundantly across two senses facilitates perception of amodal properties (e.g., synchrony, tempo, rhythm) whereas experiencing an event in one sense modality alone facilitates perception of modality specific aspects of stimulation (e.g., pitch, timbre, color, pattern, configuration). Therefore, discrimination of individual voices (modality specific information) should be enhanced when the voices are presented unimodally, in the absence of intersensory redundancy, and attenuated when they are presented bimodally, in the presence of intersensory redundancy (where amodal properties are attended). Thirty-two 3-month-old infants were habituated to the voice of a woman speaking in the context of intersensory redundancy (along with the synchronously moving face) or no redundancy (with a static face). Test trials played the voice of a novel woman speaking. Results supported our prediction and demonstrated significant discrimination (measured by visual recovery to the novel voice) in the nonredundant, but not the redundant voice condition. These findings converge with those of our prior studies and demonstrate that in early development, infants attend to different properties of events as a function of whether the stimulation is multimodal or unimodal. Introduction Most early learning occurs in the context of close face-to-face interactions. Although research demonstrates that young infants are excellent perceivers of faces and voices, we know very little about their perception of naturalistic, dynamic, multimodal person displays. For example, we do not know under what conditions infants attend to information available in the face alone, the voice alone (modality specific information), or to information available in the face and voice together (amodal information). 
Bahrick and Lickliter (2000, 2002) provided evidence for an "intersensory redundancy hypothesis" which makes predictions about infant perception of amodal and modality specific properties in the context of multimodal versus unimodal stimulation. According to the hypothesis (see Figure 1), in early development, 1) information experienced redundantly across two sensory modalities selectively recruits attention to amodal properties of events (e.g., synchrony, tempo, rhythm) at the expense of modality specific properties, whereas 2) information experienced in one sense modality alone selectively recruits attention to modality-specific aspects of the event (e.g., color, pattern, orientation, pitch, timbre) and facilitates perceptual learning of these properties at the expense of others. Therefore, infants should attend to and perceive modality specific properties, such as pitch and timbre of a voice, better when it is experienced unimodally (no intersensory redundancy) than when it is experienced bimodally with the synchronously moving face (intersensory redundancy). In contrast, in bimodal face-voice displays infant attention should be recruited to amodal properties that are redundantly presented (such as rhythm, tempo, and synchrony), at the expense of modality specific properties. The present study tested these predictions by asking whether 3-month-old infants could differentiate between two unfamiliar women's voices better when the voice was experienced with or without the synchronously moving face.

Method

Thirty-two infants were habituated, in an infant control procedure, to face-voice displays of one of two women speaking a nursery rhyme in the presence versus absence of intersensory redundancy (see Figure 2). Intersensory redundancy was provided by accompanying the speaking voice with the natural, synchronously moving face of the woman speaking (bimodal, redundant condition, N=16).
Intersensory redundancy was eliminated by presenting the speaking voice along with a static image of the face of the woman (nonredundant condition, N=16). Following habituation, infants received test trials with the voice of the novel woman speaking the same nursery rhyme, under their respective condition (with no change in the identity of the familiar face). Visual recovery to the change in voice served as the measure of discrimination. It was expected that infants would show visual recovery to the change in voice when it was experienced nonredundantly. However, when the change in voice was experienced bimodally and redundantly with the synchronously moving face, no significant visual recovery should be observed because greater attention would be directed to amodal properties of stimulation.
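The infant-controlled habituation logic used in the study above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' actual procedure: the criterion (looking on the last trials falling below half the looking on the first trials) and the window size are common conventions, and both function names are invented for this sketch.

```python
def habituated(trial_looks, window=3, criterion=0.5):
    """Infant-controlled habituation criterion (illustrative): total
    looking time across the last `window` trials drops below `criterion`
    times the total looking time across the first `window` trials."""
    if len(trial_looks) < 2 * window:
        return False  # need non-overlapping baseline and recent windows
    baseline = sum(trial_looks[:window])
    recent = sum(trial_looks[-window:])
    return recent < criterion * baseline

def visual_recovery(posthab_looks, test_looks):
    """Visual recovery: mean looking on test trials minus mean looking
    on the final (post-habituation) trials. A positive difference
    indicates renewed interest, i.e., discrimination of the change."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(test_looks) - mean(posthab_looks)
```

For example, an infant whose looking declines from roughly 20 s to 5-8 s per trial would meet this criterion, and longer looking on a novel-voice test trial than on the final habituation trials would count as visual recovery.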
Chapter
Full-text available
(From the chapter) Given that a true understanding of the nature of cognitive function can only be derived from an understanding of its underlying mechanisms, an overarching theme of our research program has been to focus on the processes of attention in infancy, and the relationship of those processes to more general cognitive products (e.g., perception, learning, recognition, discrimination, and categorization). In doing this, we have also adopted Underwood's (1975) framework for the role of individual differences in theory development, which has in turn led us to investigate both developmental and individual differences in attention during infancy and early childhood. This work has taken several forms--for example, longitudinal studies examining the predictive validity of attentional constructs for later cognitive function, experimental studies of perceptual-attentional effects, the interface between attention and higher-order cognitive functions, and the role of arousal in early attention. Our research strategies and assumptions are perhaps best explicated in a description of our program of work on identifying those subtypes of attention that contribute to or influence common measures of visual cognition across the first year. In the sections that follow, we briefly review those assumptions and strategies in the conduct of this program. We end with a discussion of what we believe are promising paths for discovery in the future.
Article
Full-text available
Preterm infants experience frequent cardiorespiratory events (CREs) including multiple episodes of apnea and bradycardia per day. This physiological instability is due to their immature autonomic nervous system and limited capacity for self-regulation. This study examined whether systematic exposure to maternal sounds can reduce the frequency of CREs in NICU infants. Fourteen preterm infants (26-32 weeks gestation) served as their own controls as we measured the frequency of adverse CREs during exposure to either Maternal Sound Stimulation (MSS) or Routine Hospital Sounds (RHS). MSS consisted of maternal voice and heartbeat sounds recorded individually for each infant. MSS was provided four times per 24-h period via a micro audio system installed in the infant's bed. Frequency of adverse CREs was determined based on monitor data and bedside documentation. There was an overall decreasing trend in CREs with age. Lower frequency of CREs was observed during exposure to MSS versus RHS. This effect was significantly evident in infants ≥ 33 weeks gestation (p=0.03), suggesting an effective therapeutic window for MSS when the infant's auditory brain development is most intact. This study provides preliminary evidence for short-term improvements in the physiological stability of NICU infants using MSS. Future studies are needed to investigate the potential of this non-pharmacological approach and its clinical relevance to the treatment of apnea of prematurity.
Article
Full-text available
The current study examined the relations between individual differences in sustained attention in infancy, the temperamental trait behavioral inhibition in childhood, and social behavior in adolescence. The authors assessed 9-month-old infants using an interrupted-stimulus attention paradigm. Behavioral inhibition was subsequently assessed in the laboratory at 14 months, 24 months, 4 years, and 7 years. At age 14 years, adolescents acted out social scenarios in the presence of an unfamiliar peer as observers rated levels of social discomfort. Relative to infants with high levels of sustained attention, infants with low levels of sustained attention showed increasing behavioral inhibition throughout early childhood. Sustained attention also moderated the relation between childhood behavioral inhibition and adolescent social discomfort, such that initial levels of inhibition at 14 months predicted later adolescent social difficulties only for participants with low levels of sustained attention in infancy. These findings suggest that early individual differences in attention shape how children respond to their social environments, potentially via attention's gate-keeping role in framing a child's environment for processing.
Article
Full-text available
In this study, we had 3 major goals. The 1st goal was to establish a link between behavioral and event-related potential (ERP) measures of infant attention and recognition memory. To assess the distribution of infant visual preferences throughout ERP testing, we designed a new experimental procedure that embeds a behavioral measure (paired comparison trials) in the modified-oddball ERP procedure. The 2nd goal was to measure infant ERPs during the paired comparison trials. Independent component analysis (ICA) was used to identify and to remove eye-movement components from the electroencephalographic data, thus allowing for the analysis of ERP components during paired comparison trials. The 3rd goal was to localize the cortical sources of infant visual preferences. Equivalent current dipole analysis was performed on the ICA components related to experimental events. Infants who demonstrated novelty preferences in paired comparison trials demonstrated greater amplitude Negative central ERP components across tasks than infants who did not demonstrate novelty preferences. Visual preference also interacted with attention and stimulus type. The cortical sources of infant visual preferences were localized to inferior and superior prefrontal cortex and to the anterior cingulate cortex.
Article
Several theories have stressed the importance of intersensory integration for development but have not identified specific underlying integration mechanisms. The author reviews and synthesizes current knowledge about the development of intersensory temporal perception and offers a theoretical model based on epigenetic systems theory, proposing that responsiveness to 4 basic features of multimodal temporal experience (temporal synchrony, duration, temporal rate, and rhythm) emerges in a sequential, hierarchical fashion. The model postulates that initial developmental limitations make intersensory synchrony the basis for the integration of intersensory temporal relations and that the emergence of responsiveness to the other, increasingly more complex, temporal relations occurs in a hierarchical, sequential fashion by building on the previously acquired intersensory temporal processing skills.
Book
This book provides both a review of the literature and a theoretical framework for understanding the development of visual attention from infancy through early childhood, including the development of selective and state-related aspects in infants and young children as well as the emergence of higher controls on attention. It also explores individual differences in attention and possible origins of ADHD.
Article
Research has demonstrated that intersensory redundancy (stimulation synchronized across multiple senses) is highly salient and facilitates processing of amodal properties in multimodal events, bootstrapping early perceptual development. The present study is the first to extend this central principle of the intersensory redundancy hypothesis (IRH) to certain types of intrasensory redundancy (stimulation synchronized within a single sense). Infants were habituated to videos of a toy hammer tapping silently (unimodal control), depicting intersensory redundancy (synchronized with a soundtrack) or intrasensory redundancy (synchronized with another visual event; light flashing or bat tapping). In Experiment 1, 2-month-olds showed both intersensory and intrasensory facilitation (with respect to the unimodal control) for detecting a change in tempo. However, intrasensory facilitation was found when the hammer was synchronized with the light flashing (different motion) but not with the bat tapping (same motion). Experiment 2 tested 3-month-olds using a somewhat easier tempo contrast. Results supported a similarity hypothesis: intrasensory redundancy between two dissimilar events was more effective than that between two similar events for promoting processing of amodal properties. These findings extend the IRH and indicate that in addition to intersensory redundancy, intrasensory redundancy between two synchronized dissimilar visual events is also effective in promoting perceptual processing of amodal event properties.
Poster
Research conducted on infant-directed speech has primarily focused on infants’ perception of unimodal auditory speech, even though natural speech is typically multimodal. According to the intersensory redundancy hypothesis (IRH), information presented redundantly and synchronously across two or more senses recruits attention and facilitates perceptual learning of amodal properties more successfully than information that is presented to only one sense (Bahrick & Lickliter, 2000, 2002). Multimodal presentations provide redundant information for amodal properties such as prosody, intonation, intensity, tempo and rhythm available in audiovisual speech which are important for conveying meaning. The present study tested predictions of the IRH and assessed whether 4½-month-old infants could perceive a change in meaning from prohibition to approval or vice versa when unimodal auditory speech passages were presented (no redundancy) versus bimodal audiovisual speech passages (redundancy provided by the speaker’s moving face). Consistent with predictions, results of a habituation procedure demonstrated that under the bimodal audiovisual speech condition, but not the unimodal auditory speech condition, infants detected a change in meaning across different speech passages. These findings demonstrate that at 4½ months, infants are able to discriminate changes in prosody that convey prohibition and approval in the presence of intersensory redundancy provided by audiovisual speech, but not in auditory speech where there is no redundancy. Further, at 4½ months, infants are able to generalize across different speech passages on the basis of prosody conveying approval or prohibition. These findings support predictions of the IRH and suggest that detection of meaning conveyed by prosody first emerges by detecting intersensory redundancy in audiovisual speech.
Article
Research has demonstrated that intersensory redundancy such as audiovisual temporal synchrony guides and constrains early selective attention (Bahrick & Lickliter, 2000; 2002; Bahrick, Walker, & Neisser, 1981). Audiovisual temporal synchrony can guide auditory selective attention even in the context of competing background noise. Research indicates that 7-month-olds can attend to a female voice synchronized with a face while ignoring a nonsynchronous male voice of the same amplitude (Hollich, Newman, & Jusczyk, 2005). The present study assessed auditory selective attention in younger infants under more difficult conditions: two identical nursery rhymes were spoken concurrently, at the same amplitude, and out of phase with one another, by two females. Results suggest that infants selectively attended to the voice synchronized with the face during habituation and ignored the concurrent asynchronous distractor voice, even under these difficult conditions.
Article
Method

Forty-seven 3-month-old infants (M = 90.04 days, SD = 3.98) were habituated, in an infant-controlled procedure, to a video of one of three women speaking a nursery rhyme (see Figure 1). Twenty-three infants received a bimodal audiovisual display (synchronous speech) and twenty-four infants received a unimodal visual display (silent speech) during the habituation procedure. Following habituation (a decrease in looking time of 50%), memory was assessed after a 15-minute delay. The memory test consisted of eight 20-second trials of the familiar woman's face speaking silently paired side by side with a novel woman's face speaking silently. The lateral positions were counterbalanced across two blocks of four trials. Proportion of total looking time (PTLT) to the novel woman's face was the dependent measure.

Results

Results (depicted in Figure 2) support our predictions and demonstrate that infants who were habituated to the unimodal visual display showed a significant PTLT to the novel face according to a single sample t-test against the chance value of .50 (t(23) = 2.64, p < .05). In contrast, infants who were habituated to the bimodal audiovisual display showed no preference for either face (t(22) = -0.50, p > .05). Further, those in the unimodal visual condition showed a significantly greater PTLT to the novel face than infants in the bimodal audiovisual condition (t(45) = 2.10, p < .05).
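The PTLT measure and the chance-level comparison reported above can be sketched as follows. This is a minimal illustration of the analysis logic, not the authors' code; the helper names and the example numbers are invented for this sketch.

```python
import math
import statistics

def ptlt_novel(novel_look, familiar_look):
    """Proportion of total looking time (PTLT) directed to the novel
    face during a paired-comparison test trial."""
    return novel_look / (novel_look + familiar_look)

def one_sample_t(values, chance=0.5):
    """One-sample t statistic comparing a group mean against a chance
    value (.50 here), as used to test whether mean PTLT to the novel
    face exceeds what looking at random would produce."""
    n = len(values)
    se = statistics.stdev(values) / math.sqrt(n)  # sample SD / sqrt(n)
    return (statistics.mean(values) - chance) / se
```

With hypothetical per-infant PTLT scores clustered above .50, the t statistic is positive; it would then be compared against the critical value for n - 1 degrees of freedom.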
Article
The goal of this study was to examine developmental change in visual attention to dynamic visual and audiovisual stimuli in 3‐, 6‐, and 9‐month‐old infants. Infant look duration was measured during exposure to dynamic geometric patterns and Sesame Street video clips under three different stimulus modality conditions: unimodal visual, synchronous audiovisual, and asynchronous audiovisual. Infants looked longer toward Sesame Street stimuli than geometric patterns, and infants also looked longer during multimodal audiovisual (synchronous and asynchronous) presentations than during unimodal visual presentations. There was a three‐way interaction of age, stimulus type, and stimulus modality. Significant differences were found within and between age groups related to stimulus modality (visual or audiovisual) while viewing Sesame Street clips. No significant interaction was found between age and stimulus type while infants viewed dynamic geometric patterns. These findings indicate that patterns of developmental change in infant attention vary based on stimulus complexity and modality of presentation.
Article
Five- and 3-month-old infants' perception of infant-directed (ID) faces and the role of speech in perceiving faces were examined. Infants' eye movements were recorded as they viewed a series of two side-by-side talking faces, one infant-directed and one adult-directed (AD), while listening to ID speech, AD speech, or in silence. Infants showed consistently greater dwell time on ID faces vs. AD faces, and this ID face preference was consistent across all three sound conditions. ID speech resulted in higher looking overall, but it did not increase looking at the ID face per se. Together, these findings demonstrate that infants' preferences for ID speech extend to ID faces.
Article
Synchrony—a construct used across multiple fields to denote the temporal relationship between events—has been applied to the study of mother–infant interaction and is suggested here as a framework for the study of interpersonal relationships. Defined as the temporal coordination of micro-level social behavior, parent–infant synchrony is charted in its development across infancy from the initial consolidation of biological rhythms during pregnancy to the emergence of symbolic exchange between parent and child. Synchrony is shown to depend on physiological mechanisms supporting bond formation in mammals—particularly physiological oscillators and neuroendocrine systems such as those involving the hormone oxytocin. Developmental outcomes of the synchrony experience are observed in the domains of self-regulation, symbol use, and the capacity for empathy across childhood and adolescence. Specific disruptions to the parameters of synchrony that may be observed in various pathological conditions, such as prematurity or maternal affective disorder, are detailed. A time-based, micro-analytic behavioral approach to the study of human relationship may offer new insights on intersubjectivity across the lifespan.
Article
Infant visual attention has been studied extensively within cognitive paradigms using measures such as look duration and reaction time, but less work has examined how infant attention operates in social contexts. In addition, little is known about the stability of individual differences in attention across cognitive and social contexts. In this study, a cross-sectional sample of 50 infants (4 and 6 months of age) were first tested in a look duration and reaction time task with static visual stimuli. Next, their mothers participated with the infants in the still-face procedure, a mildly distressing social interaction paradigm that involves violation of expectancy. Individual differences in looking and emotion were stable across the phases of the still-face task. Further, individual differences in looking measures from the visual attention task were related to the pattern of looking shown across the phases of the still-face procedure. Results indicate that individual differences in attentional measures show moderate stability within cognitive and social contexts, and that the ability of infants to shift and disengage looks may affect their ability to regulate interaction in social contexts.
Article
Four experiments are described which investigated the role of the mother's voice in facilitating recognition of the mother's face at birth. Experiment 1 replicated our previous findings (Br. J. Dev. Psychol. 1989; 7: 3-15; The origins of human face perception by very young infants. Ph.D. Thesis, University of Glasgow, Scotland, UK, 1990) indicating a preference for the mother's face when a control for the mother's voice and odours was used only during the testing. A second experiment adopted the same procedures, but controlled for the mother's voice from birth through testing. The neonates were at no time exposed to their mother's voice. Under these conditions, no preference was found. Further, neonates showed only few head turns towards both the mother and the stranger during the testing. Experiment 3 looked at the number of head turns under conditions where the newborn infants were exposed to both the mother's voice and face from birth to 5 to 15 min prior to testing. Again, a strong preference for the mother's face was demonstrated. Such preference, however, vanished in Experiment 4, when neonates had no previous exposure to the mother's voice-face combination. The conclusion drawn is that a prior experience with both the mother's voice and face is necessary for the development of face recognition, and that intermodal perception is evident at birth. The neonates' ability to recognize the face of the mother is most likely to be rooted in prenatal learning of the mother's voice. Copyright © 2004 John Wiley & Sons, Ltd.
Article
Two experiments assessing event-related potentials in 5-month-old infants were conducted to examine neural correlates of attentional salience and efficiency of processing of a visual event (woman speaking) paired with redundant (synchronous) speech, nonredundant (asynchronous) speech, or no speech. In Experiment 1, the Nc component associated with attentional salience was greater in amplitude following synchronous audiovisual as compared with asynchronous audiovisual and unimodal visual presentations. A block design was utilized in Experiment 2 to examine efficiency of processing of a visual event. Only infants exposed to synchronous audiovisual speech demonstrated a significant reduction in amplitude of the late slow wave associated with successful stimulus processing and recognition memory from early to late blocks of trials. These findings indicate that events that provide intersensory redundancy are associated with enhanced neural responsiveness indicative of greater attentional salience and more efficient stimulus processing as compared with the same events when they provide no intersensory redundancy in 5-month-old infants. © 2013 Wiley Periodicals, Inc. Dev Psychobiol.
Article
Current knowledge of the perceptual and cognitive abilities in infancy is largely based on the visual habituation-dishabituation method. According to the comparator model [e.g., Sokolov (1963a) Perception and the conditioned reflex. Oxford: Pergamon Press], habituation refers to stimulus encoding and dishabituation refers to discriminatory memory performance. The review also describes the dual-process theory and the attention disengagement approach. The dual-process theory points to the impact of natural stimulus preferences on habituation-dishabituation processes. The attention disengagement approach emphasizes the contribution of the ability to shift attention away from a stimulus. Moreover, arguments for the cognitive interpretation of visual habituation and dishabituation are discussed. These arguments are provided by physiological studies and by research on interindividual differences. Overall, the review shows that current research supports the comparator model. It emphasizes that the investigation of habituation and dishabituation expands our understanding of visual attention processes in infants. © 2012 Wiley Periodicals, Inc. Dev Psychobiol.
Article
Many studies have demonstrated that newborns prefer upright faces over upside-down faces. Based on this evidence, some have suggested that faces represent a special class of stimuli for newborns and there is a qualitative difference between the processes involved in perception of facelike and non-facelike patterns (i.e. structural hypothesis). Others suggest that there is no reason to suppose that faces are different from other patterns, because faces, like any other class of visual stimuli, are subject to filtering by the properties of the visual system (i.e. sensory hypothesis). The core question that will be addressed in the present paper is whether, to manifest itself, face preference requires the unique structure of the face, represented by the relative spatial location of its internal features, or rather some more general properties that other stimuli may also possess. Evidence will be presented supporting the idea that newborns do not respond to facelike stimuli by ‘facedness’ but, rather, by some general structural characteristics that best satisfy the constraints of the immature visual system. Copyright © 2001 John Wiley & Sons, Ltd.
Article
The characteristics of scanning patterns between the ages of 6 and 26 weeks were investigated through repeated assessments of 10 infants. Eye movements were recorded using a corneal-reflection system while the infants looked at 2 dynamic stimuli: the naturally moving face of their mother and an abstract stimulus. Results indicated that the way infants scanned these stimuli stabilized only after 18 weeks, which is slightly later than the ages reported in the literature on infants' scanning of static stimuli. This effect was especially prominent for the abstract stimulus. From the 14-week session on, infants adapted their scanning behavior to the stimulus characteristics. When scanning the video of their mother's face, infants directed their gaze at the mouth and eye region most often. Even at the youngest age, there was no indication of an edge effect. When infants are born, their motor skills are very limited, and the fact that they have very little control over their limbs restricts the way in which they are able to explore the world around them. The oculomotor system—unlike other motor systems—approximates its mature state several months after birth. The infant exercises this system every day from birth on. This makes vision one of the most important channels through which babies learn about the world surrounding them. However, during the first months of life, eye movements and visual acuity are also subject to certain constraints. For example, during the first month, eye movements
INFANCY, 6(2), 231–255. Copyright © 2004, Lawrence Erlbaum Associates, Inc.
Chapter
Chapter contents: Introduction; Selective Attention: The Underappreciated Foundation for Perception, Learning and Memory in a Dynamic, Multimodal Environment; Intermodal Perception: Definitions, Issues, and Questions; The Intersensory Redundancy Hypothesis (IRH); The Role of Intersensory Redundancy in Social Development: Perception of Faces, Voices, Speech, and Emotion; Lessons from Atypical Development; Conclusions and Future Directions: Toward a More Integrated, Ecologically Relevant Model of Perceptual Development; Acknowledgments; References
Article
How do infants begin to understand spoken words? Recent research suggests that word comprehension develops from the early detection of intersensory relations between conventionally paired auditory speech patterns (words) and visible objects or actions. More importantly, in keeping with dynamic systems principles, the findings suggest that word comprehension develops from a dynamic and complementary relationship between the organism (the infant) and the environment (language addressed to the infant). In addition, parallel findings from speech and non-speech studies of intersensory perception provide evidence for domain general processes in the development of word comprehension. These research findings contrast with the view that a lexical acquisition device with specific lexical principles and innate constraints is required for early word comprehension. Furthermore, they suggest that learning of word–object relations is not merely an associative process. The data support an alternative view of the developmental process that emphasizes the dynamic and reciprocal interactions between general intersensory perception, selective attention and learning in infants, and the specific characteristics of maternal communication.
Article
Children with autism were compared to developmentally matched children with Down syndrome or typical development in terms of their ability to visually orient to two social stimuli (name called, hands clapping) and two nonsocial stimuli (rattle, musical jack-in-the-box), and in terms of their ability to share attention (following another's gaze or point). It was found that, compared to children with Down syndrome or typical development, children with autism more frequently failed to orient to all stimuli, and that this failure was much more extreme for social stimuli. Children with autism who oriented to social stimuli took longer to do so compared to the other two groups of children. Children with autism also exhibited impairments in shared attention. Moreover, for both children with autism and Down syndrome, correlational analyses revealed a relation between shared attention performance and the ability to orient to social stimuli, but no relation between shared attention performance and the ability to orient to nonsocial stimuli. Results suggest that social orienting impairments may contribute to difficulties in shared attention found in autism.
Article
This study examined 4- and 6-month-olds' responses to static or dynamic stimuli using behavioral and heart-rate-defined measures of attention. Infants looked longest to dynamic stimuli with an audio track and least to a static stimulus that was mute. Overall, look duration declined with age to the different stimuli. The amount of time spent in sustained attention at 4 months, but not at 6 months, was related to stimulus discrimination. These results indicate that the decline in look duration typically observed during the middle of the 1st year for static stimuli does generalize to dynamic stimuli. The results further suggest that the amount of time spent in sustained attention during habituation may be an important indicator of processing in younger infants.
Article
A longitudinal sample of 226 infants was tested monthly on habituation and novelty preference tasks, augmented with simultaneous heart rate recording from 3 to 9 months of age. Infants were then administered the Bayley Scales of Infant Development II (BSID) and MacArthur Communicative Development Inventory (MCDI) at 12, 18, and 24 months. Prior findings regarding the decline in look duration with age were replicated. Age-based factors were extracted from the monthly assessments, an early attention factor from 3 to 6 months and a late attention factor from 7 to 9 months. A novelty preference factor, which grouped recognition performance at 4 and 6 months of age, was also derived. Two clusters of infants were derived based on the developmental course of change from the early attention to late attention look duration aggregates: One cluster (n=150) decreased strongly, and another (n=50) increased. This finding was bolstered by subsequent analyses of data from infants who completed all tests run from 3 to 9 months. The results of this study suggest that the developmental course of attention during infancy is an important clue to cognitive and language outcomes in early childhood.
Article
In this review we examine empirical and theoretical work in three eras—infancy, toddlerhood, and early childhood—and for each era describe the structure of dyadic synchrony in interactions involving children and their caregivers, as well as offer speculation about its developmental function for the child. We review divergent literatures dealing with synchrony-related constructs which, together, suggest that although the structure and function of synchrony change throughout the course of early development, the ability to achieve synchrony may represent a crucial developmental achievement for significant dyadic relationships, one that facilitates social, emotional, and cognitive growth for the child.
Article
This study investigates early executive attention in infancy by studying the relations between infant sequential looking and other behaviors predictive of later self-regulation. One early marker of executive attention development is anticipatory looking, the act of looking to the location of a target prior to its appearance in that location, a process that involves endogenous control of visual orienting. Previous studies have shown that anticipatory looking is positively related to executive attention as assessed by the ability to resolve spatial conflict in 3–4-year-old children. In the current study, anticipatory looking was positively related to cautious behavioral approach in response to non-threatening novel objects in 6- and 7-month-old infants. This finding and previous findings showing the presence of error detection in infancy are consistent with the hypothesis that there is some degree of executive attention in the first year of life. Anticipatory looking was also related to the frequency of distress, to looking away from disturbing stimuli, and to some self-regulatory behaviors. These results may indicate either early attentional regulation of emotion or close relations between early developing fear and later self-regulation. Overall, the results suggest the presence of rudimentary systems of executive attention in infants and support further studies using anticipatory looking as a measure of individual differences in attention in infancy.
Article
The behavior of eight babies was studied longitudinally as they faced people and a doll. The eight babies were observed biweekly from 3 to 25 weeks. Among them, five continued to be observed on a monthly basis up to 45 weeks. At each visit the babies were presented with their mother, a female stranger, and a doll, each of which was alternately active and inactive. Each condition lasted 45 s at 3, 5, and 7 weeks and 60 s thereafter. The results showed that by 5 to 9 weeks the proportion of time babies looked, smiled, and vocalized as well as moved their arms toward people differed significantly from that produced toward the doll when confounding variables such as familiarity and activity of the stimuli were manipulated. This pattern of early differential responsiveness, together with important developmental changes over the 10-month period, suggests that infants are reacting to communication-related cues in the presence of social stimuli. The implications of these findings for the development of communication are discussed.
Article
The speech register used by adults with infants and young children, known as motherese, is linguistically simplified and characterized by high pitch and exaggerated intonation. This study investigated infant selective listening to motherese speech. The hypothesis tested was that infants would choose to listen more often to motherese when given the choice between a variety of natural infant-directed and adult-directed speech samples spoken by four women unfamiliar to the subjects. Forty-eight 4-month-old infants were tested in an operant auditory preference procedure. Infants showed a significant listening preference for the motherese speech register.
Article
This paper presents a simple and widely applicable multiple test procedure of the sequentially rejective type, i.e. hypotheses are rejected one at a time until no further rejections can be made. It is shown that the test has a prescribed level of significance protection against error of the first kind for any combination of true hypotheses. The power properties of the test and a number of possible applications are also discussed.
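The sequentially rejective procedure described in this abstract (commonly known as the Holm, or Holm-Bonferroni, procedure) can be sketched as follows. This is a minimal illustration, not the paper's own notation: the function name `holm_reject` and the `alpha` parameter are labels chosen here for clarity.

```python
def holm_reject(p_values, alpha=0.05):
    """Sequentially rejective (Holm) multiple test procedure.

    Sort the m p-values in ascending order and compare the i-th
    smallest (0-indexed rank) against alpha / (m - i). Reject
    hypotheses one at a time; stop at the first comparison that
    fails, retaining that hypothesis and all with larger p-values.
    Returns a list of booleans in the original order of p_values.
    """
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # sequential stop: no further rejections can be made
    return reject
```

For example, with p-values [0.01, 0.04, 0.03, 0.005] and alpha = 0.05, the smallest p-value is tested against 0.05/4, the next against 0.05/3, and so on, so the first and fourth hypotheses are rejected while the other two are retained. The early stop is what guarantees the prescribed familywise error protection for any combination of true hypotheses.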
Article
In adults, most cognitive and emotional self-regulation is carried out by a network of brain regions, including the anterior cingulate, insula, and areas of the basal ganglia, related to executive attention. We propose that during infancy, control systems depend primarily upon a brain network involved in orienting to sensory events that includes areas of the parietal lobe and frontal eye fields. Studies of human adults and alert monkeys have shown that the brain network involved in orienting to sensory events is moderated primarily by the nicotinic cholinergic system arising in the nucleus basalis. The executive attention network is primarily moderated by dopaminergic input from the ventral tegmental area. A change from cholinergic to dopaminergic modulation would be a consequence of this switch of control networks and may be important in understanding early development. We trace the attentional, emotional, and behavioral changes in early development related to this developmental change in regulative networks and their modulators.
Article
Prior research has demonstrated intersensory facilitation for perception of amodal properties of events such as tempo and rhythm in early development, supporting predictions of the Intersensory Redundancy Hypothesis (IRH). Specifically, infants discriminate amodal properties in bimodal, redundant stimulation but not in unimodal, nonredundant stimulation in early development, whereas later in development infants can detect amodal properties in both redundant and nonredundant stimulation. The present study tested a new prediction of the IRH: that effects of intersensory redundancy on attention and perceptual processing are most apparent in tasks of high difficulty relative to the skills of the perceiver. We assessed whether by increasing task difficulty, older infants would revert to patterns of intersensory facilitation shown by younger infants. Results confirmed our prediction and demonstrated that in difficult tempo discrimination tasks, 5-month-olds perform like 3-month-olds, showing intersensory facilitation for tempo discrimination. In contrast, in tasks of low and moderate difficulty, 5-month-olds discriminate tempo changes in both redundant audiovisual and nonredundant unimodal visual stimulation. These findings indicate that intersensory facilitation is most apparent for tasks of relatively high difficulty and may therefore persist across the lifespan.