TABLE 1 - uploaded by Rebecca J. Brand
Source publication
The current study addressed the degree to which maternal speech and action are synchronous in interactions with infants. English-speaking mothers demonstrated the function of two toys, stacking rings and nesting cups, to younger infants (6-9.5 months) and older infants (9.5-13 months). Action and speech units were identified, and speech units were c...
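To make the coding described above concrete, here is a minimal sketch of how synchrony between coded speech and action units can be checked once both are represented as onset/offset intervals. The interval format, function names, and threshold are illustrative assumptions, not the authors' actual analysis pipeline.

```python
from typing import List, Tuple

# A coded unit is an (onset, offset) interval in seconds.
Interval = Tuple[float, float]

def overlap(a: Interval, b: Interval) -> float:
    """Duration of temporal overlap between two coded units."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def synchronous_pairs(speech_units: List[Interval],
                      action_units: List[Interval],
                      min_overlap: float = 0.0) -> List[Tuple[int, int]]:
    """Index pairs of speech and action units whose temporal overlap
    exceeds min_overlap seconds (the threshold is an assumption)."""
    return [(i, j)
            for i, s in enumerate(speech_units)
            for j, a in enumerate(action_units)
            if overlap(s, a) > min_overlap]

# Hypothetical coded data: a mother stacking one ring while speaking.
speech = [(0.2, 1.1), (1.8, 2.6)]  # two speech units
action = [(0.9, 2.4)]              # one action unit (placing a ring)
print(synchronous_pairs(speech, action))  # -> [(0, 0), (1, 0)]
```

Annotation tools such as ELAN export tiers of exactly this kind of onset/offset data, which makes pairwise overlap checks like this straightforward.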
Similar publications
Two new Litoria species with affinities to the L. nigropunctata species-group are described on the basis of recently collected material from the base of the Wandammen Peninsula and from the Onin Peninsula, western Papua (formerly Irian Jaya) Province of Indonesia. The descriptions mainly include data on morphology, bioacoustics and osteology.
The evolution of multiple sexual signals presents a dilemma since individuals selecting a mate should pay attention to the most honest signal and ignore the rest; however, multiple signals may evolve if, together, they provide more information to the receiver than either one would alone. Static and dynamic signals, for instance, can act as multiple...
Ikakogi is a behaviorally and morphologically intriguing genus of glassfrog. Using tadpole morphology, vocalizations, and DNA, a new species is described from the Sierra Nevada de Santa Marta (SNSM), an isolated mountain range in northern Colombia. The new taxon is the second known species of the genus Ikakogi and is morphologically identical to I....
Human activities change the acoustic environment in many settings around the world. These changes are complex, as different anthropogenic sound sources create different acoustic profiles; therefore, some sound sources may have greater impacts on wildlife than others. Animals may adapt to these altered acoustic environments by adjusting their vocali...
For male songbirds, song rate varies throughout the breeding season and is correlated with breeding cycle stages. Although these patterns have been well documented, this relationship has not been used to predict a bird's breeding status from acoustic monitoring. This challenge of using a response (i.e., behavior) to indirectly measure an underlying...
Citations
... Infant-directed action. When interacting with infants, caregivers act on objects as they talk about them (e.g., Karmazyn-Raz & Smith, 2023; Meyer et al., 2011; Schatz et al., 2022; Suanda et al., 2016; Suarez-Rivera et al., 2022), and there is evidence that caregivers modify these object-directed actions in ways that are analogous to the modifications observed in IDS. Action demonstrations directed to infants (vs. ...
Everyday caregiver‐infant interactions are dynamic and multidimensional. However, existing research underestimates the dimensionality of infants’ experiences, often focusing on one or two communicative signals (e.g., speech alone, or speech and gesture together). Here, we introduce “infant‐directed communication” (IDC): the suite of communicative signals from caregivers to infants including speech, action, gesture, emotion, and touch. We recorded 10 min of at‐home play between 44 caregivers and their 18‐ to 24‐month‐old infants from predominantly white, middle‐class, English‐speaking families in the United States. Interactions were coded for five dimensions of IDC as well as infants’ gestures and vocalizations. Most caregivers used all five dimensions of IDC throughout the interaction, and these dimensions frequently overlapped. For example, over 60% of the speech that infants heard was accompanied by one or more non‐verbal communicative cues. However, we saw marked variation across caregivers in their use of IDC, likely reflecting tailored communication to the behaviors and abilities of their infant. Moreover, caregivers systematically increased the dimensionality of IDC, using more overlapping cues in response to infant gestures and vocalizations, and more IDC with infants who had smaller vocabularies. Understanding how and when caregivers use all five signals—together and separately—in interactions with infants has the potential to redefine how developmental scientists conceive of infants’ communicative environments, and enhance our understanding of the relations between caregiver input and early learning.
Research Highlights
Infants’ everyday interactions with caregivers are dynamic and multimodal, but existing research has underestimated the multidimensionality (i.e., the diversity of simultaneously occurring communicative cues) inherent in infant‐directed communication.
Over 60% of the speech that infants encounter during at‐home, free play interactions overlaps with one or more of a variety of non‐speech communicative cues (see the sketch following these highlights).
The multidimensionality of caregivers’ communicative cues increases in response to infants’ gestures and vocalizations, providing new information about how infants’ own behaviors shape their input.
These findings emphasize the importance of understanding how caregivers use a diverse set of communicative behaviors—both separately and together—during everyday interactions with infants.
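As a rough illustration of how an overlap figure like the "over 60%" statistic above can be derived from coded intervals, the sketch below counts the share of speech units that overlap at least one non-verbal cue. It operationalizes overlap by unit count; the paper's exact metric may differ, and the names and data are assumptions for illustration.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (onset, offset) in seconds

def merge(intervals: List[Interval]) -> List[Interval]:
    """Merge overlapping intervals into a disjoint, sorted union."""
    merged: List[Interval] = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

def pct_speech_with_cue(speech: List[Interval],
                        cues: List[Interval]) -> float:
    """Percentage of speech units overlapping at least one cue."""
    cue_union = merge(cues)
    def has_cue(s: Interval) -> bool:
        return any(max(s[0], c0) < min(s[1], c1) for c0, c1 in cue_union)
    return 100.0 * sum(map(has_cue, speech)) / len(speech)

# Hypothetical coded intervals: speech plus gesture/touch/action cues.
speech = [(0.0, 1.0), (2.0, 3.0), (5.0, 6.0)]
cues = [(0.5, 1.5), (2.2, 2.8)]
print(pct_speech_with_cue(speech, cues))  # -> ~66.7 (2 of 3 units)
```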
... Indeed, pointing combined with "naming" occurs more commonly in book-sharing than in any other conversational context (Dunn and Wooding, 1977), and is regarded as key to book-sharing's function as a "language acquisition device" (Ninio and Bruner, 1978; Ninio, 1983). Although basic associative processes may contribute to the word-learning afforded by pointing plus naming, the occurrence of this behavior during book-sharing is typically more dynamic than a simple temporal coincidence of auditory and deictic stimuli (Meyer et al., 2011). Thus, parents often use intonational and facial modulation for emphasis as they name the target of their pointing (Nencheva et al., 2021), as well as synchronized gestural animation (Novack and Goldin-Meadow, 2017), particularly when naming depicted actions. ...
Parental reading to young children is well-established as being positively associated with child cognitive development, particularly their language development. Research indicates that a particular, "intersubjective," form of using books with children, "Dialogic Book-sharing" (DBS), is especially beneficial to infants and pre-school aged children, particularly when using picture books. The work on DBS to date has paid little attention to the theoretical and empirical underpinnings of the approach. Here, we address the question of what processes taking place during DBS confer benefits to child development, and why these processes are beneficial. In a novel integration of evidence, ranging from non-human primate communication through iconic gestures and pointing, archaeological data on pre-hominid and early human art, to experimental and naturalistic studies of infant attention, cognitive processing, and language, we argue that DBS entails core characteristics that make it a privileged intersubjective space for the promotion of child cognitive and language development. This analysis, together with the findings of DBS intervention studies, provides a powerful intellectual basis for the wide-scale promotion of DBS, especially in disadvantaged populations.
... When putting yourself in a robot's shoes, these features become vital for following an ongoing communication with a human. By using concepts like contingency (Gergely and Watson, 1999) and acoustic packaging (Meyer et al., 2011), robots can not only appear to engage with humans in an interaction, but can tune their input towards their perceptual needs (Fischer et al., 2013; Lohan et al., 2011, 2012; Schillingmann et al., 2009). ...
In this chapter we present an overview of how adaptation of movement and behaviour can favour communication in Human-Robot Interaction (HRI). A model of a communication space based on an action-reaction classification is presented. Past research in HRI on verbal and non-verbal communication and on the adaptation of communication is reviewed. Further, the influence of human-aware navigation is discussed, and concepts like proxemics, path planning, and robot motion are presented. The chapter discusses possible explicit and implicit methods of adaptation and identifies interruption concepts for communication.
... This finding concerning the high percentages of touch + speech input is consistent with prior research indicating that caregiver-infant communication may be multimodal (Gogate et al. 2000) and suggests that, at least in the tactile modality, touches are often presented multimodally with speech in both the HRA and LRC groups. These findings extend previous reports of multimodal communication in the audio-visual modality (Gogate et al. 2000, 2015; Meyer et al. 2011) to a relatively understudied modality of audio-tactile interactions. Nonetheless, researchers examining multimodal communication in the audio-visual modality have mainly studied speech with gestures, which are always used as communicative tools (Gogate et al. 2000, 2015), while touch may not always have a solely communicative intent (e.g., holding an infant up may support her position, but may also convey affect and a gentle reminder to the infant to engage her muscles). ...
Multimodal communication may facilitate attention in infants. This study examined the presentation of caregiver touch-only and touch + speech input to 12-month-olds at high (HRA) and low risk for ASD. Findings indicated that, although both groups received a greater number of touch + speech bouts compared to touch-only bouts, the duration of overall touch that overlapped with speech was significantly greater in the HRA group. Additionally, HRA infants were less responsive to touch-only bouts compared to touch + speech bouts suggesting that their mothers may use more touch + speech communication to elicit infant responses. Nonetheless, the exact role of touch in multimodal communication directed towards infants at high risk for ASD warrants further exploration.
... In the Spontaneous speech task, the older child explained to his younger sibling (IDS) and to his mother (ADS) how to assemble a Mr. Potato Head toy. In the Storybook speech task, the older child described to his sibling (IDS) and to his mother (ADS) the pictures from the book Frog, Where Are You? (Meyer, Hard, Brand, McGarvey, & Baldwin, 2011). ...
Do children with hearing loss use infant-directed speech while addressing their normal-hearing (NH) infant siblings? This case study examined speech characteristics in two age-matched children, one with severe-to-profound hearing loss who had used bilateral cochlear implants (CIs) for 5 years (the CI child) and one NH child. Both children explained how to assemble a toy and showed a picture book to their NH infant siblings and their mothers. Prosodic characteristics of mean fundamental frequency, fundamental frequency range, utterance duration, and speech rate were collected from each speech sample. The vowel space area was calculated using tokens of the cardinal vowels /i/, /a/, and /u/. Both children modified prosodic characteristics when addressing their infant siblings compared with their mothers, and the CI child also expanded his vowel space area. The CI child modified more prosodic characteristics than the NH child. The results demonstrate an effect of the interactional context independent of the child's hearing status. The findings suggest a beneficial impact of cochlear implantation on the CI child's ability to learn the acoustic differences between infant- and adult-directed speech registers and to employ them in his own speech production.
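The vowel space area mentioned in this abstract is commonly computed as the area of the triangle spanned by the corner vowels /i/, /a/, and /u/ in F1/F2 space. Below is a minimal sketch using the shoelace formula; the formant values are made up for illustration, not the study's data.

```python
# Vowel space area from three corner vowels, each an (F1, F2) point in Hz.
def triangle_area(p1, p2, p3):
    """Shoelace formula for the area of a triangle."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

i = (300, 2300)  # /i/: low F1, high F2 (hypothetical formants)
a = (800, 1300)  # /a/: high F1, mid F2
u = (350, 900)   # /u/: low F1, low F2

print(f"vowel space area = {triangle_area(i, a, u):.0f} Hz^2")  # 325000 Hz^2
```

A larger area when addressing the infant, as reported for the CI child, would correspond to more peripheral (hyperarticulated) corner vowels.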
... For example, language can help structure action sequences (i.e. acoustic packaging [15,16]) and modulate children's representation of goal-directed actions [2] by highlighting the relevance of an action and guiding children's imitation of these actions (see also [3]). Language can also facilitate the comparison of actions in infants as early as 10 months of age and help infants to understand actions as being goal-directed [17]. ...
... The model comparison between the full and the reduced model was not significant (χ² = 1.16, d.f. = 4, p = 0.89; see the recomputation sketch after the abstract below). ...
Communication with young children is often multimodal in nature, involving, for example, language and actions. The simultaneous presentation of information from both domains may boost language learning by highlighting the connection between an object and a word, owing to temporal overlap in the presentation of multimodal input. However, the overlap is not merely temporal but can also covary in the extent to which particular actions co-occur with particular words and objects, e.g. carers typically produce a hopping action when talking about rabbits and a snapping action for crocodiles. The frequency with which actions and words co-occur in the presence of the referents of these words may also impact young children’s word learning. We, therefore, examined the extent to which consistency in the co-occurrence of particular actions and words impacted children’s learning of novel word–object associations. Children (18 months, 30 months and 36–48 months) and adults were presented with two novel objects and heard their novel labels while different actions were performed on these objects, such that the particular actions and word–object pairings always co-occurred (Consistent group) or varied across trials (Inconsistent group). At test, participants saw both objects and heard one of the labels to examine whether participants recognized the target object upon hearing its label. Growth curve models revealed that 18-month-olds did not learn words for objects in either condition, and 30-month-old and 36- to 48-month-old children learned words for objects only in the Consistent condition, in contrast to adults, who learned words for objects independently of the actions presented. Thus, consistency in the multimodal input influenced word learning in early childhood but not in adulthood. In terms of a dynamic systems account of word learning, our study shows how multimodal learning settings interact with the child’s perceptual abilities to shape the learning experience.
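The non-significant model comparison quoted in the citing context above is a standard likelihood-ratio test between nested models, and the p-value follows from the χ² survival function. In the sketch below, the deviance values are invented solely so that their difference matches the reported statistic.

```python
# Recomputing a likelihood-ratio model comparison (chi2 = 1.16, d.f. = 4).
from scipy.stats import chi2

deviance_reduced = 101.16  # hypothetical deviance of the reduced model
deviance_full = 100.00     # hypothetical deviance of the full model
df = 4                     # number of parameters dropped from the full model

lr_stat = deviance_reduced - deviance_full  # likelihood-ratio statistic
p_value = chi2.sf(lr_stat, df)              # survival function: 1 - CDF
print(f"chi2({df}) = {lr_stat:.2f}, p = {p_value:.2f}")
# -> chi2(4) = 1.16, p = 0.88 (the context reports 0.89; the small gap is
#    presumably rounding of the reported statistic)
```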
... Segmenting events into individual actions may seem trivial, given that adults rely on top-down knowledge of goals and intentions (Zacks & Tversky, 2001). However, early in development event segmentation poses a greater challenge, as infants may lack top-down knowledge about events they regularly encounter (Baldwin, Baird, Saylor, & Clark, 2001; Meyer, Hard, Brand, McGarvey, & Baldwin, 2011). In the eyes of an infant, a more suitable analogy might be a ballet novice identifying a rond de jambe during a dancer's routine, or a newcomer to basketball identifying the event boundaries of a pick and roll. ...
... Though providing the crucial first evidence for motionese, this method is arguably limited because raters cannot easily focus on only one measure at a time, and might overlook important modulations at the action level due to the full-interaction scope. Another line of work has successfully used image processing (Nagai & Rohlfing, 2009; Rohlfing et al., 2006), but, like related work on the structural characteristics of motionese (Brand et al., 2007; Meyer, Hard, Brand, McGarvey, & Baldwin, 2011), these cup-stacking studies were not designed to capture modulations specific to instructing novel movements. Hence, this study focused on capturing parents' kinematic modulations of their action demonstrations. ...
Parents tend to modulate their movements when demonstrating actions to their infants. Thus far, these modulations have primarily been quantified by human raters and for entire interactions, thereby possibly overlooking the intricacy of such demonstrations. Using optical motion tracking, the precise modulations of parents’ infant‐directed actions were quantified and compared to adult‐directed actions and between action types. Parents demonstrated four novel objects to their 14‐month‐old infants and adult confederates. Each object required a specific action to produce a unique effect (e.g. rattling). Parents were asked to demonstrate an object at least once before passing it to their demonstration partner, and they were subsequently free to exchange the object as often as desired. Infants’ success at producing the objects’ action‐effects was coded during the demonstration session, and their memory of the action‐effects was tested after a several‐minute delay. Indicating general modulations across actions, parents repeated demonstrations more often, performed the actions in closer proximity, and demonstrated action‐effects for longer when interacting with their infant compared to the adults. Meanwhile, modulations of movement size and velocity were specific to certain action‐effect pairs. Furthermore, a ‘just right’ modulation of proximity was detected, since infants’ learning, memory, and parents’ prior evaluations of their infants’ motor abilities were related to demonstrations that were performed neither too far from nor too close to the infants. Together, these findings indicate that infant‐directed action modulations are not solely overall exaggerations but are dependent upon the characteristics of the to‐be‐learned actions, their effects, and the infant learners.
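As a sketch of how such kinematic measures can be derived from motion-tracking data, the snippet below computes path length, mean velocity, and proximity from a sampled 3D hand trajectory. The sampling rate, marker setup, and data are illustrative assumptions, not the study's recording pipeline.

```python
import numpy as np

np.random.seed(0)
fs = 100.0  # sampling rate in Hz (assumption)

# Fake hand-marker trajectory: (N, 3) array of x/y/z positions in metres.
hand = np.cumsum(np.random.randn(500, 3) * 0.002, axis=0)
infant = np.array([0.6, 0.0, 0.3])  # fixed infant position (assumption)

steps = np.diff(hand, axis=0)                      # frame-to-frame displacement
path_length = np.linalg.norm(steps, axis=1).sum()  # movement "size" proxy
mean_velocity = path_length / (len(hand) / fs)     # metres per second
proximity = np.linalg.norm(hand - infant, axis=1).mean()  # mean distance

print(f"path = {path_length:.2f} m, velocity = {mean_velocity:.3f} m/s, "
      f"proximity = {proximity:.2f} m")
```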
... Infants discover word meanings during close interactions with caregivers. Important research from the ecological-dynamic systems approach and the embodied cognition approach establishes that caregivers' speech and gestures guide infants' attention to events and inform their processing of spoken words and meanings (Abu-Zhaya, Seidl, & Cristia, 2017; Brand & Tapscott, 2007; de V. Rader & Zukow-Goldring, 2010, 2012; Gogate, Bahrick, & Watson, 2000; Gogate, Bolzani, & Betancourt, 2006; Gogate, Maganti, & Bahrick, 2015; Gogate, Maganti, & Laing, 2013; Matatyaho & Gogate, 2008; Matatyaho-Bullaro, Gogate, Mason, Cadavid, & Abdel-Mottaleb, 2014; Meyer, Hard, Brand, McGarvey, & Baldwin, 2011; Nomikou, Koke, & Rohlfing, 2017; Nomikou & Rohlfing, 2011; Yu & Smith, 2012; Zukow, 1989; Zukow-Goldring, 1997). These studies focused on infants' attention to synchronous auditory and visual input from caregivers during close interactions, e.g., talking while acting on objects, although some also examine synchronous auditory-visual-tactile input. ...
... These studies share a common perspective that infants are experiencing multimodal events defined by spatial and temporal information that is redundant across modalities (Bahrick & Lickliter, 2000; Gibson, 1969, 1988; Gibson & Pick, 2000; Gogate & Hollich, 2010; Gogate et al., 2001). Additionally, Meyer et al. (2011) examined the timing of caregiver speech and goal-directed actions using the idea that caregivers' speech provides acoustic packaging of event information (Brand & Tapscott, 2007; Hirsh-Pasek & Golinkoff, 1996). They showed that mothers who were asked to demonstrate the actions of various toys to their 6- to 13-month-olds temporally coordinated verbal descriptions of ongoing actions with the actual actions more often than they coordinated actions with other speech tokens. ...
Touch cues might facilitate infants’ early word comprehension and explain the early understanding of body part words. Parents were instructed to teach their infants, 4- to 5-month-olds or 10- to 11-month-olds, nonce words for body parts and a contrast object. Importantly, they were given no instructions about the use of touch. Parents spontaneously synchronized the location and timing of their touches on the infant’s body with the body part nonce word that they were teaching. Their unrelated, mismatched words and touches did not show synchrony. Moreover, when parents used this synchronized speech and touch input, infants looked more frequently to the target locations on their bodies and to their parent’s faces compared to when the input was mismatched or only speech. Similar to other research on parents’ use of speech and gesture input, these results show how parents use multimodal input and raise the possibility that infants may use touch cues to segment and map early words.
... We know that, from a developmental perspective, there are gestures that occur in conjunction with object learning in children. Caregivers use movements and particular gestures, e.g., looming and pointing, to highlight new object words on the one hand and new object relations on the other [78-85]. More details on word acquisition and object labels can be found in Cangelosi et al. [86]. ...