Article

Fetuses Respond to Father's Voice But Prefer Mother's Voice after Birth


Abstract

Fetal and newborn responses to audio recordings of their father's versus mother's reading of a story were examined. At home, fathers read a different story to the fetus each day for 7 days. Subsequently, in the laboratory, continuous fetal heart rate was recorded during a 9-min protocol comprising three 3-min periods: baseline no-sound, voice (mother or father), and post-voice no-sound. Following a 20-min delay, the opposite voice was delivered. Newborn head-turning was observed on 20-s trials: three no-sound, three voice (mother or father), three opposite-voice, and three no-sound trials, using the same segment of each parent's recording. Fetuses showed a heart rate increase to both voices that was sustained over the voice period. Consistent with prior reports, newborns showed a preference for their mother's but not their father's voice. The characteristics of voice stimuli that capture fetal attention and elicit a response are yet to be identified. © 2013 Wiley Periodicals, Inc. Dev Psychobiol.
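Analytically, the protocol above reduces to comparing mean fetal heart rate across consecutive equal-length periods. A minimal sketch of that comparison, assuming a 1 Hz heart-rate trace and entirely synthetic data (the function name and all values are illustrative, not from the study):

```python
import numpy as np

def period_means(fhr, fs=1.0, period_s=180):
    """Split a continuous fetal heart-rate trace into three consecutive
    periods (baseline / voice / post-voice) and return each period's mean."""
    n = int(period_s * fs)
    return [float(np.mean(fhr[i * n:(i + 1) * n])) for i in range(3)]

# Hypothetical trace: ~140 bpm baseline with a sustained ~5 bpm rise
# during the 3-min voice period, as the abstract describes.
rng = np.random.default_rng(0)
trace = np.concatenate([
    140 + rng.normal(0, 1, 180),   # baseline no-sound
    145 + rng.normal(0, 1, 180),   # voice period
    143 + rng.normal(0, 1, 180),   # post-voice no-sound
])
baseline, voice, post = period_means(trace)
print(voice - baseline > 2)  # sustained heart-rate increase to the voice
```

The same period-wise summary would apply to either parent's voice presentation; real analyses would of course also test the change statistically across fetuses.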


... A number of researchers report that fetuses are oriented to their mother's voice, as a special stimulus, both as an internal and an external auditory percept (Querleu et al. 1989; Richards et al. 1992), and that they show specific reactivity to their mother's voice (Marx and Nagy 2015). For example, they can discriminate between maternal and unfamiliar voices (Kisilevsky et al. 2003, 2009); they respond with a decrease in motor activity in the 10 s following the onset of maternal speech (Voegtline et al. 2013); and their physiological and motor responses depend on whether the maternal voice is presented live (cardiac deceleration) or through an audio recording (increase in heart rate) (Lee and Kisilevsky 2014). A specific response to the mother's voice was also observed at the neurological level in a study using functional magnetic resonance imaging (fMRI) in fetuses (Jardri et al. 2012). ...
... Moreover, fetuses can differentiate between male and female voices (Lecanuet et al. 2000), and they react both to the father's and the mother's voices by an increase in heart rate (Lee and Kisilevsky 2014). Another study investigated the effect of the father's voice on fetal heart rate behavior (Kisilevsky et al. 2009). ...
... More specifically, it has long been known that term newborns show a marked preference for the mother's voice over unfamiliar female voices (DeCasper and Fifer 1980; Ockleford et al. 1988). Regarding the father's voice, term newborns are responsive to it and show, at birth, a similar heart rate increase to maternal and paternal voice presentations (Lee and Kisilevsky 2014). Even though they discriminate between the father's voice and that of other males (DeCasper and Prescott 1984), they nevertheless show no preference for the father's voice as late as 4 months of age (Ward and Cooper 1999). ...
Article
Full-text available
Preterm infants’ behavioral state and physiological parameters are affected by environmental noise and adult voices. Only a handful of studies have explored the effects of direct maternal vocal communication on preterm infants’ autonomic nervous system responses. Furthermore, to our knowledge, no study to date has investigated the effect of the father’s voice on preterm infants’ behaviors and physiological parameters. This study evaluated the effects of both mothers’ and fathers’ infant-directed speech on preterm infants’ behavioral states. Fourteen stable premature infants, serving as their own controls, were videotaped while their mother and father spoke to them for 5 min over 2 consecutive days. Infants’ behavioral states and state lability were coded for each voice presentation (father and mother) in three conditions: before, during, and after the intervention. Present results show an interaction between vocal intervention and infant behavioral state. Both maternal and paternal speech modified infant behavioral state, but no significant difference in the behavioral state distribution was observed between mother’s and father’s voice presentations. Infants spent more time in a quiet alert state when they heard either voice compared to the no-vocalization baseline. These findings indicate the importance of both the father’s and the mother’s voice for preterm infants. Parental vocal intervention has an awakening effect. Further studies are needed to better identify the benefits for preterm infants of a relational care approach.
... Equally, particular stimuli present in utero have the potential to influence the development of the related sensory apparatus in ways that embed particular response outputs, so that if encountered postnatally these stimuli will elicit the preprogrammed responses and thereby influence neonatal behaviours. In the cases where this is known to occur, it has been characterised as trans-natal gustatory (taste), olfactory (smell), or auditory (sound) continuity, where the prenatal embedding of specific response capabilities has been characterised as "learning" and the postnatal accessing of that functionality has been considered to represent "memory" (e.g., [5,30,38–45]) (see Section 5). ...
... In general, for a sensory system to become sufficiently functional to generate a conscious experience of its particular modality, operational connectivity is required between modality-specific receptors and their peripheral, spinal, and brainstem nerve pathways, or their dedicated cranial nerves, which then direct sensory impulse traffic to specific subcortical and cortical regions of the brain for the processing that gives rise to each sensation (e.g., [34–47]). Note, however, that cortically supported conscious experience of, and cognitively directed responses to, particular sensations in mammalian young cannot occur before the establishment of neural connectivity between the cerebral cortex and the subcortical regions of the brain; note also that the timing of when this connectivity occurs in relation to birth depends on the neurological maturity of the young at birth (for references see [25,35]) (also see Section 3.1). ...
... Contrary to earlier views that the embryo/fetus occupies a sensory void in utero (for references see [48]), it is now well established that there exists a wide range of stimuli that have demonstrable impacts on sensory systems as they develop (e.g., [5,30,38–44,46,48,49,66]). ...
Article
Full-text available
Presented is an updated understanding of the development of sensory systems in the offspring of a wide range of terrestrial mammals, the prenatal exposure of those systems to salient stimuli, and the mechanisms by which that exposure can embed particular sensory capabilities that prepare newborns to respond appropriately to similar stimuli they may encounter after birth. Taken together, these are the constituents of the phenomenon of “trans-natal sensory continuity” where the embedded sensory capabilities are considered to have been “learnt” and, when accessed subsequently, they are said to have been “remembered”. An alternative explanation of trans-natal sensory continuity is provided here in order to focus on the mechanisms of “embedding” and “accessing” instead of the potentially more subjectively conceived outcomes of “learning” and “memory”. Thus, the mechanistic concept of “intrauterine sensory entrainment” has been introduced, its foundation being the well-established neuroplastic capability of nervous systems to respond to sensory inputs by reorganising their neural structures, functions, and connections. Five conditions need to be met before “trans-natal sensory continuity” can occur. They are (1) sufficient neurological maturity to support minimal functional activity in specific sensory receptor systems in utero; (2) the presence of sensory stimuli that activate their aligned receptors before birth; (3) the neurological capability for entrained functions within specific sensory modalities to be retained beyond birth; (4) specific sensory stimuli that are effective both before and after birth; and (5) a capability to detect those stimuli when or if they are presented after birth in ways that differ (e.g., in air) from their presentation via fluid media before birth. 
Numerous beneficial outcomes of this process have been reported for mammalian newborns, but the range of benefits depends on how many of the full set of sensory modalities are functional at the time of birth. Thus, the breadth of sensory capabilities may be extensive, somewhat restricted, or minimal in offspring that are, respectively, neurologically mature, moderately immature, or exceptionally immature at birth. It is noted that birth marks a transition from intrauterine sensory entrainment to extrauterine sensory entrainment in all mammalian young. Depending on their neurological maturity, extrauterine entrainment contributes to the continuing maturation of the different sensory systems that are operational at birth, the later development and maturation of the systems that are absent at birth, and the combined impact of those factors on the behaviour of newborn and young mammals. Intrauterine sensory entrainment helps to prepare mammalian young for life immediately after birth, and extrauterine sensory entrainment continues this process until all sensory modalities develop full functionality. It is apparent that, overall, extrauterine sensory entrainment and its aligned neuroplastic responses underlie numerous postnatal learning and memory events which contribute to the maturation of all sensory capabilities that eventually enable mammalian young to live autonomously.
... When the maternal voice is presented in an audio recording, an increase in heart rate is detected [20,21]. On the other hand, cardiac deceleration was found when the fetus was exposed to the mother's voice in a live condition [22]. The cardiac acceleration observed when the fetus encounters a familiar stimulus (the mother's voice) in an unfamiliar manner (the recorded maternal voice) may be considered a reaction to novelty. ...
... Few studies have focused on the father's voice. Interestingly, after birth, newborns prefer their mothers' voices over their fathers' voices [22]. Another relevant finding is that fetal reactivity to the maternal voice depends on fetal neurological maturation. There were no significant differences in fetal gross body movements in response to recorded maternal vs. unfamiliar female voices. ...
Article
Full-text available
Background Studies have shown prenatal memory underlying the ability of newborns to discriminate maternal vs. other voices and to recognize linguistic stimuli presented prenatally by the mother. The fetus reacts to the maternal voice at the end of gestation, but it is important to clarify the indicators and conditions of these responses. Objective To understand the state of the art concerning: 1) indicators of fetal reactions to maternal voice vs. other voices; 2) conditions of maternal voice required to obtain a fetal response; 3) neonatal recognition of maternal voice and of linguistic material presented prenatally; and 4) obstetric and behavioral maternal conditions compromising fetal ability to discriminate between maternal and other female voices. Method Systematic review using EBSCO, WEBSCIENCE and MEDLINE. Eligibility: studies with maternal voice delivered before birth as stimulus and with fetal or neonatal behavior as responses. Results Fetal responses to maternal voice are observed through fetal cardiac, motor (fetal yawning decrease, mouth opening, fetal body movements) and brain responses (activation of the lower bank of the left temporal lobe). Newborns’ head orientation and non-nutritive sucking are shown to be neonatal indicators. Conclusion Gestational age, baseline measures (fetal state, acoustic conditions and pre-stimulus time) and obstetrical conditions may enable or compromise fetal discrimination between maternal and other voices. The role of maternal voice in prenatal human bonding needs to be discussed according to different maternity conditions, such as surrogate mothers. A new paradigm is suggested: the focus of research should be on maternal-fetal interaction in the presence of the maternal voice.
... It can be predicted that fetal breathing rate will be impacted as the prenate focuses attention on mom's voice, or on other voices the prenate can hear. There is also experimental evidence that prenates not only can hear and distinguish multiple voices before they are born (Walton & Bower, 1993; Kaye & Bower, 1994; Aldridge et al., 1999) but that they have a special interest in the voice that pertains to the body of the person in which they find themselves (Sai, 2005; Mampe et al., 2009; Lee & Kisilevsky, 2014; Marx & Nagy, 2015). Although the relevant research shows plainly (Campbell, 2004; Moon, 2017) that human prenates and neonates are extremely interested in streams of speech in their native language (indicators include breathing rate, heart rate, speed and strength of sucking movements, electrical skin potential, pupil dilation, and so forth; Salvador & Koos, 1989; Walton & Bower, 1993; Aldridge et al., 1999; Sussman et al., 2016), they do not, and logically cannot, begin to decipher sounds ahead of whole intonations over entire breath groups (Provasi et al., 2014). ...
... Sai's research also demonstrated this fact. After the infants constructed the TNR diagrammed as Figure 33, based on access to the appropriate factual experience, they could easily solve the fictional cases as others had already demonstrated (Walton & Bower, 1993;Kaye & Bower, 1994;Aldridge et al., 2001;Patterson & Werker, 2003;Voegtline et al., 2013;Lee & Kisilevsky, 2014;Marx & Nagy, 2015;Moon, 2017). They could imagine mom's moving face when only hearing her voice, or upon seeing her face, they evidently expected to hear her voice. ...
Book
Full-text available
The human language capacity stands at the very top of the intellectual abilities of us human beings, and it ranks incommensurably higher than the intellectual powers of any other organism or any robot. It vastly exceeds the touted capacities of "artificial intelligence" with respect to creativity, freedom of will (control of thoughts and words), and moral responsibility. These are traits that robots cannot possess and that can only be understood by human beings. They are no part of the worlds of robots and artificial intelligences, but those entities, and all imaginable fictions, etc., are part of our real world... True narrative representations (TNRs) can express and can faithfully interpret every kind of meaning or form in fictions, errors, lies, or nonsensical strings seeming in any way to be representations. None of the latter, however, can represent even the simplest TNR ever created by an intelligent person. It has been proved logically, in the strictest forms of mathematical logic, that all TNRs that seem to have been produced by mechanisms, robots, or artificial intelligence, must be contained within a larger and much more far-reaching TNR that cannot be explained mechanistically by any stretch of imagination. These unique constructions of real intelligence, that is, genuine TNRs, (1) have the power to determine actual facts; (2) are connected to each other in non-contradictory ways, and (3) are generalizable to all contexts of experience to the extent of the similarities of those contexts up to a limit of complete identity. What the logicomathematical theory of TNRs has proved to a fare-thee-well is that only TNRs have the three logical properties just iterated. No fictions, errors, lies, or any string of nonsense has any of those unique formal perfections. The book is about how the human language capacity is developed over time by human beings beginning with TNRs known to us implicitly and actually even before we are born. 
All scientific endeavors, all the creations of the sciences, arts, and humanities, all the religions of the world, and all the discoveries of experience utterly depend on the prior existence of the human language capacity and our power to comprehend and produce TNRs. Without it we could not enjoy any of the fruits of human experience. Nor could we appreciate how things go wrong when less perfect representations are mistaken, whether accidentally or on purpose, for TNRs. In biology, when DNA, RNA, and protein languages are corrupted, the proximate outcome is disorder, followed by disease if not corrected, and, in the catastrophic systems failures known as death in the long run. The book is about life and death. Both are dependent on TNRs in what comes out to be an absolute dependency from the logicomathematical perspective. Corrupt the TNRs on which life depends, and death will follow. Retain and respect TNRs and life can be preserved. However, ultimate truth does not reside in material entities or the facts represented by TNRs. It resides exclusively in the TNRs themselves and they do not originate from material entities. They are from God Almighty and do not depend at all on any material thing or body. TNRs outrank the material facts they incorporate and represent. It may seem strange, but the result is more certain, I believe, than the most recent findings of quantum physics. Representations are connected instantaneously. Symbol speed is infinitely faster than the speed of light. In the larger perspective of history, when TNRs are deliberately corrupted, the chaos of wars, pestilence, and destruction follows as surely as night follows day. The human language capacity makes us responsible in a unique manner for our thoughts, words, and actions. 
While it is true that no one ever asked us if we wanted to have free will or not, the fact that we have it can be disputed only by individuals who engage in a form of self-deception that borders on pathological lying, the kind that results when the deceiver can no longer distinguish between the actions he or she actually performed in his or her past experience and the sequences of events that he or she invented to avoid taking responsibility for those events, or to take credit for actions he or she never performed. On the global scale such misrepresentations lead to the sort of destruction witnessed at Sodom in the day of Abraham. That historical destruction has recently been scientifically revealed at the site of Tall el-Hammam in Jordan. More about that and all of the foregoing in the book. If you encounter errors, please point them out to the author at joller@bellsouth.net. Thank you.
... For example, fetal movements in response to sounds coming from outside the abdomen (as visualized by ultrasound imaging) were found from 19 weeks of pregnancy onwards, demonstrating that fetuses are able to hear at that gestational age (Hepper & Shahidullah, 1994). Also, fetal heart rate was found to increase when hearing the mother's voice, as well as the father's voice after daily exposure for a week prior to testing, indicating that fetuses respond to familiar voices (Lee & Kisilevsky, 2014). Moreover, fetuses were found to touch the uterine wall longer in response to their mother's touch on her abdomen, and this increased from the second to the third trimester of pregnancy (Marx & Nagy, 2017). ...
... Father also receives information on fetal development corresponding to the gestational age, and sensory abilities of his unborn baby. In the first session, it is explained to the father that, at a certain stage, babies are capable of hearing voices coming from outside the abdomen and can recognize father's voice (Lee & Kisilevsky, 2014). In the second session, father is told that babies can remember rhythms and music during pregnancy and even after birth when heard regularly (Granier-Deferre et al., 2011). ...
Article
Full-text available
Although parenting interventions including expectant fathers are scarce, they yield promising results. The Prenatal Video‐feedback Intervention to promote Positive Parenting (VIPP‐PRE) is a recently developed intervention, that is both manualized and personalized, aiming to enhance paternal sensitivity and involvement before the birth of the baby. Illustrating the intervention process, the current study presents two case studies of expectant fathers receiving VIPP‐PRE (clinical trial registration NL62696.058.17). The VIPP‐PRE program is described along with the individual dyads’ prenatal video fragments and feedback specific for each father‐fetus dyad. In addition, changes in paternal sensitivity and involvement levels are presented, as well as fathers’ and intervener's evaluation of the intervention. VIPP‐PRE promises to be a feasible short‐term and potentially effective parenting intervention for expectant fathers. Currently, a randomized controlled trial (RCT) is under review that systematically investigates the efficacy of the VIPP‐PRE. Here we aim to provide further information on the intervention process, as well as fathers’ and intervener's evaluations of this process, and the benefits of using ultrasound imaging in a parenting intervention.
... this is typically specific to mother preference (Lee & Kisilevsky, 2014) due to familiarity and high pitch. We had to record our set stimuli in advance and so were unable to use each infant's own mother's voice per study, but future studies with infants could try to do this to optimise attention and engagement. ...
... We know that infants have a preference for a female over a male's voice (Decasper & Prescott, 1984) and so perhaps a distractor male voice would have been easier for the babies to inhibit. Additionally, the preference for a woman's voice in infants is typically mother specific (Lee & Kisilevsky, 2014) due to aspects concerning familiarity. We had to record our set stimuli in advance and so were unable to use each infant's own mother's voice per study, but future studies with infants could try to do this to optimise attention and engagement with regards to a target speaker. ...
Conference Paper
This thesis aimed to explore the neural mechanisms of language processing in infants under 12 months of age by using EEG measures of speech processing. More specifically, I wanted to investigate whether infants are able to engage in the auditory neural tracking of continuous speech and how this processing can be modulated by infant attention and different linguistic environments. Limited research has investigated this phenomenon of neural tracking in infants and the potential effects that this may have on later language development. Experiment 1 set the groundwork for the thesis by establishing a reliable method to measure cortical entrainment by 36 infants to the amplitude envelope of continuous speech. The results demonstrated that infants show entrainment to speech much like that found in adults. Additionally, infants show a reliable elicitation of the Acoustic Change Complex (ACC). Follow-up language assessments were conducted with these infants approximately two years later; however, no significant predictors of coherence on later language outcomes were found. The aim of Experiment 2 was to discover how neural entrainment can be modulated by infant attention. Twenty infants were measured on their ability to selectively attend to a target speaker while in the presence of a distractor of matching acoustic intensity. Coherence values were found for the target, the distractor, and the dual signal (both target and distractor together). Thus, it seems that infant attention may fluctuate between the two speech signals, leading to entrainment to both simultaneously. However, the results were not clear, so Experiment 3 expanded on Experiment 2: EEG was recorded from 30 infants who listened to speech with no acoustic interference and to speech-in-noise with a signal-to-noise ratio of 10 dB. Additionally, it was investigated whether bilingualism has any potential effects on this process. 
Similar coherence values were observed when infants listened to speech in both conditions (quiet and noise), suggesting that infants successfully inhibited the disruptive effects of the masker. No effects of bilingualism on neural entrainment were present. For the fourth study we wanted to continue investigating infant auditory-neural entrainment when exposed to more varying levels of background noise. However, due to the COVID-19 pandemic, all testing was moved online. Thus, for Experiment 4 we developed a piece of online software (the memory card game) that could be used remotely. Seventy-three children ranging from 4 to 12 years old participated in the online experiment in order to explore how the demands of a speech recognition task interact with masker type and language, and how this changes with age during childhood. Results showed that performance on the memory card game improved with age but was not affected by masker type or language background. This improvement with age is most likely a result of improved speech perception capabilities. Overall, this thesis provides a reliable methodology for measuring neural entrainment in infants and a greater understanding of the mechanisms of speech processing in infancy and beyond.
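Cortical entrainment of the kind measured in the thesis above is commonly quantified as magnitude-squared coherence between the EEG signal and the speech amplitude envelope. A hedged sketch with purely synthetic signals (sample rate, modulation frequency, and noise level are illustrative assumptions, not values from the thesis):

```python
import numpy as np
from scipy.signal import coherence

fs = 100                       # sample rate in Hz (illustrative)
t = np.arange(0, 60, 1 / fs)   # 60 s of signal

rng = np.random.default_rng(1)
# Synthetic "speech envelope": a slow 4 Hz amplitude fluctuation,
# roughly the syllable rate of natural speech.
envelope = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)
# Synthetic "EEG": the same fluctuation buried in noise, i.e. entrained.
eeg = envelope + rng.normal(0, 2, t.size)

# Welch-averaged magnitude-squared coherence between the two signals.
f, cxy = coherence(envelope, eeg, fs=fs, nperseg=512)
peak = f[np.argmax(cxy)]       # frequency of strongest coupling
print(peak)                    # near 4 Hz for an entrained signal
```

With a non-entrained EEG (pure noise), the coherence spectrum would stay flat and low; the peak at the envelope's modulation rate is what distinguishes tracking from its absence.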
... In the last months of pregnancy, foetuses are particularly attuned to acoustic cues from the mother (Ferrari et al., 2016) and they are capable of detecting, recognizing, responding to, and remembering some characteristics of her voice (Kisilevsky et al., 2003; Voegtline, Costigan, Pater, & DiPietro, 2013). Comparisons of foetuses'/neonates' recognition of and preference for the maternal and paternal voice, however, provide contradictory evidence: (a) foetuses demonstrate a heart rate increase to both voices before birth, but after birth they turn their heads more often towards their mother's voice and away from their father's voice (Lee & Kisilevsky, 2013); or (b) a heart rate increase is observed in foetuses during playback of the mother's but not of the father's voice, though following the offset of the father's voice they show a brief heart rate increase (Kisilevsky et al., 2009). Moreover, De Casper and Prescott (1984) showed that newborns did not show a preference for their father's voice over another male voice, though they could discriminate between the two. ...
... Moreover, De Casper and Prescott (1984) showed that newborns did not show a preference for their father's voice over another male voice, though they could discriminate between the two. Taken together, these studies imply a differentiated effect of maternal and paternal voice on fetal and neonatal perception and/or memory (Lee & Kisilevsky, 2013) and some variation in the development of foetuses'/neonates' responses to maternal and paternal voice. Condon and Sander's pioneering work in 1974 showed that as early as the first day of life, the newborn in an awake-active state, sensing the timing of an adult's expressions, may make configurations of body movements in precise synchrony with the syllables and phrases of the adult's speech (Condon & Sander, 1974a, 1974b). ...
Article
We studied the timing of the spontaneous vocalization that occurs in dyadic interactions of fathers and their neonates. We recorded 21 fathers speaking to their 2 to 4‐day‐old newborns at the maternity ward and accurately coded all beginnings and endings of paternal and neonatal vocalization, using sound visualization software. Temporal relations between successive and overlapping newborn and father vocalizations were analysed. Results strongly suggest that newborn infants' vocalization timing is related to the timing of fathers' speech and that both newborns and fathers respond to each other within a 1–3 s temporal window, giving rise to sequences of turn‐taking. This study not only shows newborns' awareness of the timing of their partner's expressions but also fathers' readiness to communicate with them right from birth. We discuss the relevance of these findings to the theory of innate intersubjectivity.
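The 1–3 s turn-taking window described above can be operationalized directly from coded vocalization onsets and offsets. A minimal sketch, assuming hypothetical (start, end) timestamps in seconds; the function name and data are illustrative, not the study's actual coding scheme:

```python
# Hypothetical vocalization logs: (start, end) times in seconds.
father = [(0.0, 2.0), (6.0, 7.5), (12.0, 13.0)]
infant = [(3.2, 4.0), (8.1, 9.0), (20.0, 21.0)]

def turn_taking_pairs(a, b, lo=1.0, hi=3.0):
    """Pairs (a_offset, b_onset) where a b-vocalization starts
    between lo and hi seconds after an a-vocalization ends."""
    return [(ea, sb)
            for (_, ea) in a      # offsets of speaker a
            for (sb, _) in b      # onsets of speaker b
            if lo <= sb - ea <= hi]

print(turn_taking_pairs(father, infant))  # infant responses to father
```

Swapping the arguments counts father responses to infant vocalizations; applied to real coded recordings, the density of such pairs within the window, relative to chance, is what supports the turn-taking interpretation.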
... In addition, fetuses encode some suprasegmental characteristics of speech (intonation and rhythm) into memory (DeCasper and Spence, 1986). When fetal responses to the recorded maternal and paternal voices were compared, contrasting evidence showed the following: (a) an increase in heart rate was observed during exposure to the mother's but not the father's voice (following the offset of the father's voice, fetuses responded with a brief heart rate increase) (Kisilevsky et al., 2009; Lee and Kisilevsky, 2013); and (b) fetuses responded in a similar manner to both maternal and paternal voices before birth. After birth, neonates showed a preference for the mother's voice and no preference for the father's voice (Lee and Kisilevsky, 2013). ...
Article
Full-text available
The present study investigates the way infants express their emotions in relation to maternal and paternal questions and direct requests. We therefore compared the interpersonal engagement accompanying parental questions and direct requests between infant–mother and infant–father interactions. We video-recorded spontaneous communication between 11 infant–mother and 11 infant–father dyads (from the 2nd to the 6th month) in their homes. The main results of this study are summarized as follows: (a) there are similarities in the way preverbal infants use their emotions in spontaneous interactions with their mothers and fathers to express signs of sensitivity in sharing knowledge through questions and direct requests; and (b) the developmental trajectories of face-to-face emotional coordination in the course of parental questions descend in a similar way for both parents across the age range of this study. Regarding the developmental trajectories of emotional non-coordination, there is evidence of a linear trend with age that differs by parental gender, with fathers showing the steeper slope. The results are discussed in relation to the theory of intersubjectivity.
... As the medical needs of the infant limit closeness during the hospitalization, a way to interact and build a connection as well as improve the infant's postnatal development is to use the maternal voice. Due to prenatal auditory exposure, newborns have a tendency to prefer their mother's voice (DeCasper & Spence, 1986; Lee & Kisilevsky, 2013) and also to respond to their father's voice (Lee & Kisilevsky, 2013). Studies have suggested that listening to recorded mother's voice evokes both behavioral and physiological responses in full-term infants, such as improved sucking behavior (DeCasper & Fifer, 1980) and decelerated heart rate (Fifer & Moon, 1994). ...
Article
Full-text available
Introduction Preterm birth may disturb the typical development of the mother–infant relationship, when physical separation and emotional distress in the neonatal intensive care unit may increase maternal anxiety and create challenges for early interaction. This cluster-randomized controlled trial examined the effects of maternal singing during kangaroo care on mothers’ anxiety, wellbeing, and the early mother–infant relationship after preterm birth. Method In the singing intervention group, a certified music therapist guided the mothers (n = 24) to sing or hum during daily kangaroo care during 33–40 gestational weeks (GW). In the control group, the mothers (n = 12) conducted daily kangaroo care without specific encouragement to sing. Using a convergent mixed methods design, the quantitative outcomes included the State-Trait Anxiety Inventory (STAI) at 35 GW and 40 GW to assess the change in maternal-state anxiety levels and parent diaries to examine intervention length. Post-intervention, the singing intervention mothers completed a self-report questionnaire consisting of quantitative and qualitative questions about their singing experiences. Results The mothers in the singing intervention group showed a statistically significant decrease in STAI anxiety levels compared to the control group mothers. According to the self-report questionnaire results, maternal singing relaxed both mothers and infants and supported their relationship by promoting emotional closeness and creating early interaction moments. Discussion Maternal singing can be used during neonatal hospitalization to support maternal wellbeing and early mother–infant relationship after preterm birth. However, mothers may need information, support, and privacy for singing.
... Especially the mother's voice appears to have a vital role in infants' language processing, as it is the one voice infants are most familiar with from as early as their hearing develops. Already in utero, fetuses react with a higher heart rate to hearing their mother's voice (Kisilevsky et al., 2003, 2009; Lee & Kisilevsky, 2014), a reaction which starts around 32-34 weeks' gestational age (Kisilevsky & Hains, 2011). Moreover, newborns show a preference for their mother's voice (DeCasper & Fifer, 1980; DeCasper & Prescott, 1984; Hepper, Scott, & Shahidullah, 1993; Lee & Kisilevsky, 2014). ...
... In addition, 6- to 9-month-olds show enhanced speech segmentation skills (Barker & Newman, 2004) and word comprehension (Parise & Csibra, 2012) in challenging listening situations only when they listen to their own mother. ...
Article
The maternal voice appears to have a special role in infants’ language processing. The current eye‐tracking study investigated whether 24‐month‐olds (n = 149) learn novel words easier while listening to their mother's voice compared to hearing unfamiliar speakers. Our results show that maternal speech facilitates the formation of new word–object mappings across two different learning settings: a live setting in which infants are taught by their own mother or the experimenter, and a prerecorded setting in which infants hear the voice of either their own or another mother through loudspeakers. Furthermore, this study explored whether infants’ pointing gestures and novel word productions over the course of the word learning task serve as meaningful indexes of word learning behavior. Infants who repeated more target words also showed a larger learning effect in their looking behavior. Thus, maternal speech and infants’ willingness to repeat novel words are positively linked with novel word learning.
... Recognition processes, especially in the auditory domain, start to develop early during ontogeny (Fagan, 1973; Pascalis and de Haan, 2003; Jabès and Nelson, 2015), possibly connected to the relatively early maturation of several brain regions known to subserve memory formation, such as the hippocampus (Seress et al., 2001; Huber and Born, 2014). For example, newborns and even fetuses show a preference for their own mother's voice compared to the voice of a stranger (DeCasper and Fifer, 1980; Lee and Kisilevsky, 2014). This suggests that infants form long-term representations of familiar voices and use them to guide the short-term processing of incoming auditory information (like environmental sounds, voices, etc.) to identify their mother. ...
... It can be considered a precursor of the adult early P300, which has been related to a switch from automatic to attentional processing (e.g., Polich and Kok, 1995; Polich, 2007). This kind of relevance evaluation might be the basis of infants showing a preference for their own mother's voice very early during ontogeny (DeCasper and Fifer, 1980; Lee and Kisilevsky, 2014). The existence of a long-term memory-modulated MMR component fits nicely with the notion that familiarity with a voice or phonemes of our native language, as well as trained auditory patterns, influences the magnitude and time course of change detection as reflected in the ERP response (Näätänen et al., 1997; Cheour et al., 1998, 2002; Atienza and Cantero, 2001; Tervaniemi et al., 2001). ...
Article
Full-text available
Auditory event-related potentials (ERPs) have been successfully used in adults as well as in newborns to discriminate recall of longer-term and shorter-term memories. Specifically the Mismatch Response (MMR) to deviant stimuli of an oddball paradigm is larger if the deviant stimuli are highly familiar (i.e., retrieved from long-term memory) than if they are unfamiliar, representing an immediate change to the standard stimuli kept in short-term memory. Here, we aimed to extend previous findings indicating a differential MMR to familiar and unfamiliar deviants in newborns (Beauchemin et al., 2011), to 3-month-old infants who are starting to interact more with their social surroundings supposedly based on forming more (social) long-term representations. Using a voice discrimination paradigm, each infant was repeatedly presented with the word “baby” (400 ms, interstimulus interval: 600 ms, 10 min overall duration) pronounced by three different female speakers. One voice that was unfamiliar to the infants served as the frequently presented “standard” stimulus, whereas another unfamiliar voice served as the “unfamiliar deviant” stimulus, and the voice of the infant’s mother served as the “familiar deviant.” Data collection was successful for 31 infants (mean age = 100 days). The MMR was determined by the difference between the ERP to standard stimuli and the ERP to the unfamiliar and familiar deviant, respectively. The MMR to the familiar deviant (mother’s voice) was larger, i.e., more positive, than that to the unfamiliar deviant between 100 and 400 ms post-stimulus over the frontal and central cortex. However, a genuine MMR differentiating, as a positive deflection, between ERPs to familiar deviants and standard stimuli was only found in the 300–400 ms interval. On the other hand, a genuine MMR differentiating, as a negative deflection, between ERPs to unfamiliar deviants from ERPs to standard stimuli was revealed for the 200–300 ms post-stimulus interval. 
Overall results confirm a differential MMR response to unfamiliar and familiar deviants in 3-month-olds, with the earlier negative MMR to unfamiliar deviants likely reflecting change detection based on comparison processes in short-term memory, and the later positive MMR to familiar deviants reflecting subsequent long-term memory-based processing of stimulus relevance.
... However, all sounds to which the fetus is exposed are muffled; sounds below 300 Hz may not be attenuated at all and are generally audible, while frequencies above 2000 Hz are strongly attenuated. The masking of external sounds by internal sounds, that is, the mother's heartbeat, digestion, and, importantly, the mother's voice (Querleu et al., 1984), presumably explains why newborns can recognize their father's voice but prefer their mother's voice (Lee and Kisilevsky, 2014). Also, the exposure of preterm babies to intense low-frequency noise (50-80 dB) from outside in the NICU may block the ability to tune hair cells to the very specific prime frequencies of adjacent hair cells (Graven and Browne, 2008). ...
... As a result, the fine-grained acoustic details of the speech signal, necessary for the identification of phonemes and words, are mostly suppressed, whereas most of the prosody is preserved and reaches the fetus, who can readily learn from it. For instance, fetuses at 38 weeks of gestation have been shown to increase their heart rate when they hear their mother's voice as compared to a stranger's voice (Kisilevsky et al., 2009; Lee & Kisilevsky, 2014). Maternal language heard prenatally also shapes newborns' perceptual preferences at birth. ...
Article
Full-text available
Prosody is the fundamental organizing principle of spoken language, carrying lexical, morphosyntactic, and pragmatic information. It, therefore, provides highly relevant input for language development. Are infants sensitive to this important aspect of spoken language early on? In this study, we asked whether infants are able to discriminate well‐formed utterance‐level prosodic contours from ill‐formed, backward prosodic contours at birth. This deviant prosodic contour was obtained by time‐reversing the original one, and superimposing it on the otherwise intact segmental information. The resulting backward prosodic contour was thus unfamiliar to the infants and ill‐formed in French. We used near‐infrared spectroscopy (NIRS) in 1‐3‐day‐old French newborns (n = 25) to measure their brain responses to well‐formed contours as standards and their backward prosody counterparts as deviants in the frontal, temporal and parietal areas bilaterally. A cluster‐based permutation test revealed greater responses to the Deviant than to the Standard condition in right temporal areas. These results suggest that newborns are already capable of detecting utterance‐level prosodic violations at birth, a key ability for breaking into the native language, and that this ability is supported by brain areas similar to those in adults.
... The underlying pain-reducing mechanisms of direct breastfeeding are not fully elucidated. However, given the inconsistent effect of breastmilk provided independent of maternal contact, it is hypothesized that synergistic benefits of maternal closeness and SSC [94], maternal odor [95], auditory recognition [96], and sucking [97], contribute to the effectiveness of breastfeeding as a multi-modality pain-reducing intervention. ...
Article
Full-text available
Infants born preterm are at a high risk for repeated pain exposure in early life. Despite valid tools to assess pain in non-verbal infants and effective interventions to reduce pain associated with medical procedures required as part of their care, many infants receive little to no pain-relieving interventions. Moreover, parents remain significantly underutilized in provision of pain-relieving interventions, despite the known benefit of their involvement. This narrative review provides an overview of the consequences of early exposure to untreated pain in preterm infants, recommendations for a standardized approach to pain assessment in preterm infants, effectiveness of non-pharmacologic and pharmacologic pain-relieving interventions, and suggestions for greater active engagement of parents in the pain care for their preterm infant.
... However, this ability to process heard voices with precision is crucial in humans from birth. Several studies have indeed reported specific reactions to the mother's voice in newborns, but also in fetuses at an advanced stage of development (DeCasper & Fifer, 1980; deRegnier, Nelson, Thomas, Wewerka, & Georgieff, 2000; Hepper, Scott, & Shahidullah, 1993; Kisilevsky et al., 2009; Kisilevsky et al., 2003; Lee & Kisilevsky, 2014; Mehler, Bertoncini, Barriere, & Jassik-Gerschenfeld, 1978; Moon & Fifer, 1990). ...
Thesis
Full-text available
The human ability to recognize and identify speakers by their voices is unique and can be critical in criminal investigations. However, the lack of knowledge on the workings of this capacity overshadows its application in the field of "forensic phonetics". The main objective of this thesis is to characterize the processing of voices in the human brain and the parameters that influence it. In a first experiment, event-related potentials (ERPs) were used to establish that intimately familiar voices are processed differently from unknown voices, even when the latter are repeated. This experiment also served to establish a clear distinction between speaker recognition and identification, supported by corresponding ERP components (the P2 and the LPC, respectively). An essential contrast between the processes underlying the recognition of intimately familiar voices (P2) and that of unknown but previously heard voices (N250) was also observed. In addition to clarifying the terminology of voice processing, the first study in this thesis is the first to unambiguously distinguish between speaker recognition and identification in terms of ERPs.
This contribution is major, especially when it comes to applications of voice processing in forensic phonetics. A second experiment focused more specifically on the effects of learning modalities on later speaker identification. ERPs to trained voices were analysed along with behavioral responses of speaker identification following a learning phase in which participants were trained on voices in three modalities: audio only, audiovisual, and audiovisual interactive. Although the ERP responses to the trained voices showed effects on the same components (P2 and LPC) across the three training conditions, the range of these responses varied. The analysis of these components first revealed a face overshadowing effect (FOE) resulting in impaired encoding of voice information. This well-documented effect resulted in a smaller LPC for the audiovisual condition compared to the audio-only condition. However, the audiovisual interactive condition appeared to minimize this FOE when compared to the passive audiovisual condition. Overall, the data presented in both experiments are generally congruent and indicate that the P2 and the LPC are reliable electrophysiological markers of speaker recognition and identification. The implications of these findings for current voice-processing models and for the field of forensic phonetics are discussed.
... However, when intensive familiarization with the father's voice is carried out, the same results as those for the mother's voice are observed (Lee & Kisilevsky, 2014). More generally, it has been suggested that fetuses can recognize familiar voices and perceive prosodic or rhythmic information following repeated exposure. ...
... The first sounds the fetus hears relate to social activities and voices, with the maternal voice being dominant (Moon, 2011). Full-term newborns demonstrate a preference for their mother's voice (DeCasper & Fifer, 1980; Lee & Kisilevsky, 2014) and for infant-directed speech (Cooper & Aslin, 1990), which has implications for the role that parental voice plays. Parental voice provides a critical orienting link between the fetal auditory environment and the auditory environment experienced in the NICU (Loewy et al., 2013). ...
Article
Full-text available
Introduction: Despite medical advances, preterm birth and neonatal intensive care (NICU) hospitalization are demanding and pose risks for infants and parents. Various music therapy (MT) models have suggested parental singing to promote healthy bonding and development in premature infants, but evidence on long-term effects is lacking. Method: We present the theoretical framework and intervention protocol of a resource-oriented MT approach for premature infants and their caregivers used in the international LongSTEP trial (ClinicalTrials.gov NCT03564184). We illustrate how guiding principles manifest in MT sessions, describe frames for phases of intervention, discuss prerequisites and present hypothesized mechanisms of change. Results: The LongSTEP MT approach is resource-oriented, emphasizes parental voice and parent-infant mutual regulation, builds on family-centered care principles, and is relevant in the NICU and beyond. Essential elements include: observation and dialogue on infant and parent needs; voice as the main musical source, with parental voice as the most prominent; active parental participation; modification of music in response to infant states and cues; and integration of the family’s culture and music preferences. The music therapist facilitates and supports interaction between parents and infant. Parents learn how to adapt principles in relation to infant development across NICU hospitalization and post-discharge phases. Discussion: The LongSTEP approach is feasible in culturally diverse countries where consistent parental presence is available, but requires tailoring to local circumstances and culture, particularly in the post-discharge phase. The emphasis on parent-led infant-directed singing places a higher demand on parents than other MT approaches, and requires sufficient psychosocial and musical support for parents.
... William Fifer demonstrated that newborns, unlike any other age group, learn while they are asleep (6,7). In utero, we know that infants learn to recognize their mother's voice (8) and smell; we also know that the fetus sleeps most of the time, so these stimuli are likely presented and learned for many hours every day, much of that time while the fetus is asleep. ...
... For instance, divergent discrimination responses to rhythmically similar versus different languages may be neither necessary nor sufficient for language learning. Alternatives include that this difference in behavior is an acquired response bias; or a side effect of auditory development as affected by ambient sounds (similarly to how infants prefer their mother's voice but not their father's voice at birth, Lee and Kisilevsky 2014), in other words behaviors that emerge but that are neither necessary nor sufficient for language learning. If such a task is used to evaluate the AI learner, observations in humans and machines may be divergent for uninteresting reasons. ...
Preprint
Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings, i.e. audio (including speech) over whole days. We first place this technique into the broader context of current ways of studying both the input received by children and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then go on to argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech for representing children's input and/or for establishing production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse-engineering approach from long-form recordings entails, why it is useful, and how to evaluate success.
... Using measures of heart rate variability, several studies have provided evidence that near-term fetuses in utero are sensitive to familiar voices (Lee & Kisilevsky, 2014). At birth, infants prefer to orient toward their mother's voice (DeCasper & Fifer, 1980). ...
... Numerous studies using heart-rate measures have shown that fetuses, during the last trimester of pregnancy, are able to differentiate music from speech (Granier-Deferre, Bassereau, Ribeiro, Jacquet & Decasper, 2011; Kisilevsky, Hains, Jacquet, Granier-Deferre & Lecanuet, 2004), their native language from an unknown language (Kisilevsky et al., 2009), and their mother's voice from that of an unfamiliar female speaker (Kisilevsky et al., 2003), and that they can recognize the father's voice, even though a bias toward the maternal voice is observed (Lee & Kisilevsky, 2014). These studies highlight that, from the beginning of the third trimester of pregnancy, the auditory system is mature enough to distinguish two types of sounds, notably on the basis of prosodic cues, with a preference for familiar stimuli (Lecanuet & Granier-Deferre, 1993; Lecanuet, 1997, 2000). ...
Article
Music and speech are complex sound signals, based on the same acoustic parameters (duration, intensity, and pitch), that follow several levels of organization: morphology, phonology, semantics, syntax, and pragmatics for speech; rhythm, melody, and harmony for music. One of the most salient components of music is its melodic dimension, resulting from a set of variations in pitch (the perceptual correlate of frequency) occurring as a piece unfolds. Likewise, for speech, one of the most salient components is its melody which, combined with the tempo and timbre of the voice, forms a veritable musical score. Drawing on the literature, we ask to what extent these two communication systems, speech and music, rely on common, shared, or distinct prosodic phenomena that the baby perceives in the uterine environment and over the course of development. From the third trimester of pregnancy onward, the fetus is already able to perceive rhythms that rest on a highly regular temporal organization similar to that of music. Subsequently, the newborn shows speech perception abilities related to cues shared with music, such as stress, rhythm, speech rate, and pauses. In parallel, the language that adults address to the baby helps the infant not only to refine its knowledge of the prosodic forms of babbling, words, and sentences of its native language, but also to express its emotions in the pragmatic aspects of language.
... The agent must have a positive voice that inspires trustworthiness, competence and warmth. After a preliminary analysis of the gender dimension, we chose to design the therapeutic agent with a female voice for several reasons: (i) it is perceived as helping, not commanding [26]; (ii) we are biologically predisposed from intrauterine life to prefer the female voice and to identify the mother's voice, though not necessarily the father's [27]; (iii) the female voice is clearer and more melodious, having a calming and soothing effect (the female voice is processed in the same auditory area dedicated to musical information) [28]; (iv) the female voice inspires greater confidence than the male voice due to its higher pitch [29]. By default, the female avatar has a normal, neutral voice in terms of voice parameters: pitch, tempo and volume. ...
Chapter
This paper presents the design, a pilot implementation, and a validation of eTher, an assistive virtual agent for acrophobia therapy in a Virtual Reality environment that depicts a mountain landscape and contains a ride by cable car. eTher acts as a virtual therapist, offering support and encouragement to the patient. It directly interacts with the user and changes its voice parameters (pitch, tempo and volume) according to the patient's emotional state. eTher identifies the levels of relaxation/anxiety compared to a baseline resting recording and provides three modalities of relaxation: prompting the user to look at a favorite picture, listen to an enjoyable song, or read an inspirational quote. If the relaxation modalities fail to be effective, the virtual agent automatically lowers the level of exposure. We validated our approach with 10 users who played the game once without eTher's intervention and three times with assistance from eTher. The results showed that the participants finished the game more quickly in the last gameplay session, where the virtual agent intervened. Moreover, their biophysical data showed significant improvements in terms of relaxation state.
... The music therapist explains that even though Jan cannot yet actively respond to their voices, it is nevertheless valuable for him to hear them. He knows his parents' voices from the womb (Lee & Kisilevsky, 2013), and familiar stimuli can give him a sense of safety and security in this loud, technology-dominated environment. As a first step, the parents are encouraged to speak audibly the messages for their child that they had until now only whispered. ...
Article
Background: Newborns with congenital diaphragmatic hernia (CDH) spend their first weeks of life in the intensive care unit, which is a major burden for them and their families. Music therapy is already used with preterm infants in neonatology to stabilize the child, relieve the parents, and strengthen the parent-child bond. Its benefit for full-term newborns receiving intensive care and their families, however, has not yet been comprehensively studied. Aim: The aim of this work was to examine the specific needs, challenges, and experiences of children with CDH and their parents, and to derive suitable music therapy interventions. Methods: Using the QDA software f4analyse, 15 parent reports were evaluated with a content-structuring qualitative content analysis; the music therapy aspects were then illustrated with a hypothetical case example. Results: Parents suffer above all from organizational and emotional challenges. They want to care for their child and seek closeness to it. They are supported by their social environment and the medical staff. Protective factors are successful self-care, setting boundaries, and trust in a positive course. Conclusions: The needs and challenges center on medical aspects such as withdrawal, the parent-child bond, and parental wellbeing. The music therapy literature offers indications of various interventions that can address these aspects.
... Because of the high relevance of this information for survival and communication success (Sidtis & Kreiman, 2012), we are attuned to voice-identity information even before and right after birth (DeCasper & Fifer, 1980; Kisilevsky et al., 2003). Infants show a preference for their mother's voice in comparison to other unfamiliar female voices or to the voice of their father (Lee & Kisilevsky, 2014; Ockleford et al., 1988). ...
Thesis
Communication is omnipresent in our everyday lives. People with an autism spectrum disorder (ASD) show social difficulties and difficulties in recognizing communication signals from the face and voice. Since such difficulties can impair quality of life, a thorough understanding of the underlying mechanisms is of great importance. In this dissertation, I examined sensory brain mechanisms that underlie the processing of communication signals and that have so far received little attention in ASD research. First, I investigated whether intranasal administration of oxytocin can influence voice-identity recognition and mitigate its impairments in people with ASD. Second, I explored which neural processes underlie the difficulties in perceiving visual speech in ASD, since previous evidence was based only on behavioral data. I addressed these questions using functional magnetic resonance imaging, eye tracking, and behavioral testing. The results of the dissertation provide novel insights of high relevance both for people with ASD and for typically developed individuals. First, they confirm the assumption that atypical sensory mechanisms are fundamental to our understanding of the social difficulties in ASD. They show that atypical functions of sensory brain regions underlie the communication impairments in ASD and influence the effectiveness of interventions intended to reduce those difficulties. Second, the results provide empirical evidence for theoretical assumptions about how the typically developed brain processes visual communication signals. These findings substantially extend our current knowledge of, and future research approaches to, interpersonal communication. Moreover, they may give rise to new intervention approaches for fostering communication skills.
... Recognition of mothers' voices has been demonstrated as early as the third trimester (Kisilevsky et al., 2009). Exposure to mothers' voices in utero may explain why newborns prefer their mothers' voices even over their fathers' voices (Lee & Kisilevsky, 2013), and infants as young as 7 months can pick out their own mothers' voices from background noise better than an unfamiliar female voice (Barker & Newman, 2004). Furthermore, neural activity in infants (Imafuku, Hakuno, Uchida-Ota, Yamamoto, & Minagawa, 2014; Naoi et al., 2012) and children (Abrams et al., 2016; Liu et al., 2019) differentiates the sound of their own mothers' voices compared with an unfamiliar female voice. ...
Article
The ability to interpret others’ emotions is a critical skill for children’s socioemotional functioning. While research has emphasized facial emotion expressions, children are also constantly required to interpret vocal emotion expressed at or around them by individuals who are both familiar and unfamiliar to them. The present study examined how speaker familiarity, specific emotions, and the acoustic properties that comprise affective prosody influenced children’s interpretations of emotional intensity. Participants were 51 7- and 8-year-olds presented with speech stimuli spoken in happy, angry, sad, and non-emotional prosodies by both the child’s mother and another child’s mother, unfamiliar to the target child. Analyses indicated that children rated their own mothers as more intensely emotional compared to the unfamiliar mothers, and that this effect was specific to angry and happy prosodies. Further, the acoustic properties predicted children’s emotional intensity ratings in different patterns for each emotion. The results are discussed in terms of the significance of the mother’s voice in children’s development of emotional understanding.
... From approximately 26 weeks' gestation, fetuses or preterm infants have the capacity to react to auditory stimuli. Sounds a fetus hears within the womb include the mother's heartbeat, respiration, and voice; fetuses show recognition of the mother's voice and, in some circumstances, the father's voice as well (Lee and Kisilevsky, 2014). From 30 weeks' gestation onward, the infant is able to distinguish between varying speech tones and timbres and is also able to process complex auditory sounds. ...
Article
Full-text available
Neonatal brain development relies on a combination of critical factors, including genetic predisposition, attachment, and the conditions of the pre- and postnatal environment. Attending to the status of the infant's developing brain in its most vulnerable state, and to the impact that physiological elements of music, silences, and sounds may have in the earliest stages of brain development, can enhance vitality. However, little attention has been focused on the integral aspects of the music itself. This article will support research that has hypothesized conditions of music-therapeutic applications in an effort to further validate models of neurobehavioral care that have optimized conditions for growth, including recommendations leading toward the enhancement of self-regulatory behaviors.
... Timbre-centeredness distinguishes the vocalizations of newborns: crying, screaming, rasping, grunting, whining, sobbing, whimpering, etc. (Loewy, 1995). Even prenatally, fetuses recognize parental voices (Lee and Kisilevsky, 2014), which involves at least some timbral discrimination. Newborns distinguish different musical instruments by timbre (McAdams and Bertoncini, 1997). ...
Article
Full-text available
The current scientific research into music has been skewed in favor of its frequency-based variety prevalent in the West. However, its alternative, the timbre-based music, native to the Northeast, seems to represent an earlier evolutionary development. Western researchers commonly interpret such timbre-based music as a “defective” rendition of frequency-based music. They often regard pitch as the structural criterion that distinguishes music from non-music. We would like to present evidence to the contrary—in support of the existence of indigenous music systems based on the discretization and patterning of aspects of timbre, rather than pitch. Such music is distinguished by its personal orientation: for oneself and/or for close relatives/friends. Collective music-making is rare and exceptional because of the deeply rooted institute of “personal song” - a system of personal identification through individualized patterns of rhythm, timbre, and pitch contour—whose sound enables the recognition of a particular individual.
... Auditory perception in fact begins during gestation: the fetal ear can transmit sound from the 25th week onward, a capacity that progresses to the point that, shortly before birth, the unborn child is able to recognize its mother's voice and to discriminate differences relating to the speaker's gender, the language it has been exposed to versus a rhythmically different language, and contrasts of vowel segments and syllabic structures (Zmarich et al., 2014). Although the maternal voice is the most perceptually salient, the fetus can also recognize speech produced by other people, for example the father, demonstrating the existence of remarkable prenatal capacities for perception and memory (Lee and Kisilevsky, 2014). The prenatal and postnatal auditory environments differ, since the amniotic fluid attenuates the higher frequencies: indeed, although the acoustic speech signal reaches 10,000 Hz, the uterine environment dampens frequencies above 500 Hz (Gerhardt and Abrams, 1996). ...
Chapter
Full-text available
In all likelihood, the most frequent situation in today's world sees a child exposed during the very first years of life to more than one language, as is increasingly common in the English-speaking world (Hambly et al., 2013) and also in Italy (Galatà and Zmarich, 2011). Bilingualism is so stimulating, also from the scientific point of view, that it has led the most important theorists of the development of speech perception to recently reformulate their models, which had been developed on monolinguals (PRIMIR: Curtin, Byers-Heinlein and Werker, 2011; NLM-e: Kuhl et al., 2009; PAM-AOH: Best and McRoberts, 2003). We will focus above all on "early bilingualism", which occurs when the child is regularly exposed to at least two languages from birth to six years (for "early learning", in which the child is regularly exposed to at least two languages from one and a half to four years of age, and for "second-language acquisition" from four years onward, see De Houwer, 2009). Traditionally it was believed that bilingualism posed a risk for the correct acquisition of language, but more recent studies have highlighted favorable characteristics. Goldstein and McLeod (2012) refer to these two aspects as "positive transfer", which occurs when bilingual children are more advanced in their development than their monolingual peers, and "negative transfer", when the opposite occurs. A recent review (Hambly et al., 2013) of the phonetic-phonological production of bilingual children selected 53 studies, almost all with small samples of subjects. Comparisons between monolinguals and bilinguals usually examine the size of phonetic inventories, the percentages of correct phones, and the typology of errors. While providing a complex picture, the results warrant the claim that monolinguals and bilinguals tend to show different developmental characteristics.
For example, phonemes emerge later in bilinguals (at 20 months instead of 17 months as in monolinguals). Bilinguals also perform worse than monolinguals in all tasks in which vocabulary size is an important variable (since the vocabulary they possess in each of their languages is smaller than that of monolinguals; Curtin, Byers-Heinlein and Werker, 2011). Bilingual children show errors that are atypical in normal development, in one and/or both languages, as well as an unusually high frequency of errors typical of normal development at a later age than their monolingual peers. The authors cited by Hambly et al. (2013), to which the reader is referred, attribute these errors to a problematic interaction between the lexical structures and phonological characteristics of the specific language pairs at play in each group of subjects. There is evidence of transfer from the dominant language (L1) to the language acquired later (L2), but also of transfer in the opposite direction (e.g., in studies on VOT; cf. Simon, 2010; Yavas, 2002). Based on the observations that the same phonemes shared by the two languages are acquired at different ages, and on the diversity of phonological processes, the majority of the cited studies seem to favor the existence of two separate but interacting phonological systems. Of the 10 studies comparing phonetic inventories and error patterns, 8 concern English-Spanish bilingual children. Overall, the data suggest that one of the two languages, in their case English, was affected more negatively than Spanish. When, however, both frequency of use and proficiency in the two languages were controlled, positive transfer was found in English for children aged 5-6 years (Goldstein & Bunta, 2010), a result compatible with the hypothesis that positive transfer is more evident at older ages, given the longer exposure to a second language.
If this hypothesis were confirmed, the critical period for bilingual acquisition could be longer than that of monolinguals. At present there is no convincing evidence that bilinguals develop language more slowly than monolinguals, but their development is qualitatively different and more variable, owing to the interference between the linguistic and phonological structures of the two languages (cf. the interdependence model of Paradis and Genesee, 1996). The strong variability of cross-linguistic transfer depends not only on the linguistic characteristics of the two languages at play (e.g., the articulatory complexity of the phonemes, their frequency of use, and their functional load; Stokes and Surendra, 2005; Ingram, 2012), but also on idiosyncratic characteristics such as the dominance of one language over the other, the age of acquisition of L2, and general language abilities (Flege, 2007), together with personal interaction strategies (Schnitzer and Krasinski, 1996). According to Fabiano-Smith and Goldstein (2010), who found evidence of accelerated acquisition of the phones shared between English and Spanish in bilingual children, where bilingual children show a rate of phonetic acquisition equal to that of monolinguals (even in the presence of some qualitative differences), this can be interpreted as positive transfer, given the additional effort faced by children acquiring two languages.
... Term newborns' preference for the mother's voice compared to an unfamiliar female voice (DeCasper & Fifer, 1980) or to the father's voice (Lee & Kisilevsky, 2014), provides evidence of an experience-based prenatal auditory memory (DeCasper & Spence, 1986;DeCasper, Lecanuet, Busnel, Granier-Deferre, & Maugeais, 1994). Fetal cardiac response to the mother's voice has been observed at 32-34 weeks of gestation (Kisilevsky & Hains, 2011) and activation of the lower bank of the left temporal lobe has been observed through fMRI measures at 34 weeks of gestation (Jardri et al., 2012). ...
... Speech information is therefore not transmitted in its entirety, and this point is of great importance in our research. (Kisilevsky et al., 2009), their mother's voice from that of a stranger (Kisilevsky et al., 2003), and, very recently, to recognize the father's voice, still with a preference for the maternal voice (Lee & Kisilevsky, 2014). These studies suggest that, from the beginning of the third trimester of pregnancy, the auditory system is mature enough to distinguish, notably on the basis of prosodic cues, two types of sounds, with a preference for familiar stimuli. ...
Thesis
The aim of this thesis was to explore some of the mechanisms underlying the perception of linguistic prosody. This interest stems from the fact that prosody, which is carried in the acoustic signal by cues of intensity, duration, and pitch, could guide infants in learning words and their order in sentences. We therefore studied a mechanism that could underlie these early prosodic abilities: the iambic-trochaic law (ITL). The ITL (Woodrow, 1909; Hayes, 1995; Nespor et al., 2008) is one of the organizing principles of auditory perception that has been proposed as playing an important role not only in adult perception but also during language acquisition in infancy. The ITL posits a general preference for grouping sound elements (syllables, musical notes, ...) into pairs according to variations in the intensity, pitch, or duration of these elements: loud or high-pitched sounds mark the beginning of groups with a trochaic pattern (strong-weak or high-low), whereas long sounds mark the end of groups with an iambic pattern (short-long). However, languages differ in how they use these acoustic cues to mark prosodic prominence in words and phrases. For example, French has no lexical stress but a phrase-level accent in which the final syllable is lengthened, creating a short-long contrast (a duration-based iambic pattern). Conversely, in English or German, lexical stress generally falls on the first syllable, which is louder and/or higher-pitched, inducing a trochaic pattern based on intensity or pitch. The linguistic environment could therefore interact with the ITL. This thesis is organized along two axes.
On the one hand, in a syllable-pair segmentation/recognition task, our data show that sensitivities to the ITL are present in French and German adults (Exp. 1) and 7.5-month-old infants (Exp. 2). Weak cross-linguistic differences between the two groups are found for intensity-based grouping. On the other hand, using NIRS (Near-InfraRed Spectroscopy), we measured brain-activity responses in newborns and showed that sensitivities to the ITL are present at birth and already influenced by the language heard in utero (Exp. 3-6). Finally, we explored how the linguistic environment influences the ability of 10-month-old infants to discriminate rhythmic word-stress patterns (Exp. 7). Our results reveal that bilingual infants learning French and a language with variable lexical stress are able to discriminate stress patterns, unlike French monolinguals. Taken together, our results contribute to understanding the developmental origin of the ITL, its modulation by the linguistic environment, and the language-specific development of rhythmic processing and preferences.
... Newborn infants recognize specific phrases that the mother has frequently repeated during pregnancy-even when those phrases are produced by an unfamiliar female (DeCasper & Spence, 1986). While the mother's voice is the loudest, and therefore most salient, fetuses can also hear and learn from speech produced by other speakers (e.g., their father; Lee & Kisilevsky, 2013) and speech repeated via audio recordings (Partanen et al., 2013). Such findings demonstrate remarkable prenatal perceptual and memory capacities. ...
Article
Over the first weeks and months following birth, infants' initial, broad-based perceptual sensitivities become honed to the characteristics of their native language. In this article, we review this process of emerging specialization within the context of a cascading “critical period” (CP) framework, in which periods of maximal openness to experience of different aspects of language occur at sequential, overlapping points in development. Importantly, as infants' experience of speech is not limited to auditory signals, but is informed by—for example—their experience of talking faces and their own oral motor movements, we review the trajectory of perceptual specialization in multisensory language processing. Throughout, we highlight the impact of increasing perceptual specialization on later language outcomes (e.g., word learning, foundations of syntax, literacy), and consider how the outcomes can be compromised if/when the timing of perceptual specialization has been perturbed.
... The preference is not simply for the maternal voice but rather for the filtered, low-keyed maternal timbre as heard from within the womb (Fifer and Moon 1995). Other voices too may become significant as evidenced by a newborn's preference for a father's voice that was heard while in the womb (Lee and Kisilevsky 2014). The question becomes: are these voices appreciated simply because of repeated exposure or in part because of the cohesion of maternal and fetal intercorporeality? ...
Article
Full-text available
Within the mother’s womb, life finds its first stirrings. The womb shelters the fetus, the growing child within. We recognize the existential traces of a wombed existence when a newborn calms in response to being held; when a newborn stills in response to his or her mother’s heartbeat; and, when a newborn startles in the presence of bright light. Yet, how does experiential human life begin within another human being? What are the conditions and paths of becoming for the fetus within the womb? And for the child born early, what “womb” welcomes the premature child in neonatal intensive care?
... (DeCasper and Fifer, 1980; Fernald, 1985; Purhonen et al., 2004). It was also revealed that, in utero, fetuses show a preference for their mother's voice (Kisilevsky et al., 2003, 2009; Lee and Kisilevsky, 2014). ...
Article
In the 20th century, mother-infant separation shortly after birth in hospitals became routine and unique to humans. However, this hospital birth practice differs from the practice in our evolutionary history, where newborn survival depended on close and essentially continuous maternal contact. The time shortly after birth represents a psychophysiologically sensitive or critical period for programming future physiology and behaviour. We hypothesize that early maternal separation as conducted in conventional hospital practice may induce epigenetic changes similar to those found in various mental diseases, changes that may also be implicated in neurodevelopmental disorders.
Preprint
Full-text available
Background: Although exclusive breastfeeding is recommended for the first 6 months of life, breastfeeding rates in most developed countries are low. Sensory responsiveness has been found to interfere with infant care, development, and routines, but has not yet been examined as a breastfeeding barrier. The aim of this study was to explore the association between infant sensory responsiveness and exclusive breastfeeding, and whether it can predict cessation of exclusive breastfeeding before 6 months of age. Methods: In this prospective cohort study, participants were 164 mothers and their infants recruited 2 days after birth in a maternity ward between June 2019 and January 2021. At this time, participating mothers completed a demographic and delivery information questionnaire. At 6 weeks after birth, the mothers completed the Infant Sensory Profile 2 (ISP2), reporting their infants' sensory responsiveness in daily activities. At 6 months, infants' sensory responsiveness was assessed using the Test of Sensory Functions in Infants (TSFI), and the Bayley Scales of Infant and Toddler Development, 3rd Edition, was administered. Additionally, mothers provided information about their breastfeeding status and were divided into two groups accordingly: exclusive breastfeeding (EBF) and non-exclusive breastfeeding (NEBF). Results: The incidence of atypical sensory responsiveness (mostly of the sensory over-responsivity type) at 6 weeks was twice as high among NEBF infants as among EBF infants (36.2% vs. 17%, χ² = 7.41, p = .006). Significant group differences were found in the ISP2 touch section (F = 10.22, p = .002). In addition, NEBF infants displayed more sensory over-responsivity behaviors than EBF infants in the TSFI deep-touch (F = 2.916, p = .001) and tactile-integration subtests (F = 3.095, p < .001), and had lower scores in the adaptive motor functions subtest (F = 2.443, p = .013). Logistic regression modeling revealed that ISP2 at 6 weeks (typical vs. atypical) and TSFI total score at 6 months predicted 28% of NEBF at 6 months (χ² = 23.072, p = .010). Conclusions: Atypical infant sensory responsiveness, predominantly of the sensory over-responsivity type, was found to predict NEBF at 6 months after birth. This study contributes to the understanding of barriers to exclusive breastfeeding, highlighting the importance of early identification of sensory over-responsivity in infants. The findings may support the development of early sensory interventions and individualized breastfeeding support tailored to the infant's unique sensory profile.
Article
Purpose The study's primary aim was to investigate developmental changes in the perception of vocal loudness and voice quality in children 3–6 years of age. A second aim was to evaluate a testing procedure—the intermodal preferential looking paradigm (IPLP)—for the study of voice perception in young children. Method Participants were categorized in two age groups: 3- to 4-year-olds and 5- to 6-year-olds. Children were tested remotely via a Zoom appointment and completed two perceptual tasks: (a) voice discrimination and (b) voice identification. Each task consisted of two tests: a vocal loudness test and a voice quality test. Results Children in the 5- to 6-year-old group were significantly more accurate than children in the 3- to 4-year-old group in discriminating and identifying differences between voices for both loudness and voice quality. The IPLP, used in the identification task, was found to successfully detect differences between the age groups for overall accuracy and for most of the sublevels of vocal loudness and voice quality. Conclusions Results suggest that children's ability to discriminate and identify differences in vocal loudness and voice quality improves with age. Findings also support the use of the IPLP as a useful tool to study voice perception in young children.
Article
Voice timbre, the unique acoustic information in a voice by which its speaker can be recognized, is particularly critical in mother-infant interaction. Correct identification of vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition remain ambiguous and have predominantly been studied in terms of the infant's cognitive voice-recognition abilities. Here, we show for the first time that caregivers actively maximize their chances of being correctly recognized by presenting more details of their vocal timbre through adjustments to their voices known as infant-directed speech (IDS) or baby talk, a vocal register that is widespread across most of the world's cultures. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found in two cohorts of speakers, US English and Swiss German mothers, that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS, with subsequent benefits for recognition. Critically, automatic speaker identification using a Gaussian mixture model based on Mel Frequency Cepstral Coefficients performed significantly better in two experiments when trained with IDS as opposed to ADS. We argue that IDS has evolved as part of an adaptive set of evolutionary strategies that promote indexical signalling by caregivers to their offspring, thereby promoting social bonding via voice and the acquisition of linguistic systems.
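The clustering step described in this abstract (k-means over MFCC frames, with cluster "size" compared across registers) can be sketched in miniature. The sketch below is purely illustrative and assumes nothing about the study's actual pipeline: it substitutes synthetic 2-D feature vectors for real Mel Frequency Cepstral Coefficients, uses a minimal hand-rolled k-means, and the helper names (`kmeans`, `mean_cluster_radius`, `synth_frames`) are our own, not the study's.

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means over tuples; returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by Euclidean distance.
        for i, p in enumerate(points):
            assign[i] = min(range(k), key=lambda c: math.dist(p, centroids[c]))
        # Update step: each centroid moves to the mean of its members.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = tuple(sum(x) / len(members) for x in zip(*members))
    return centroids, assign

def mean_cluster_radius(points, centroids, assign):
    """Average distance of points to their assigned centroid --
    a simple proxy for the 'size' of timbre clusters."""
    return sum(math.dist(p, centroids[a]) for p, a in zip(points, assign)) / len(points)

def synth_frames(n, spread, seed):
    """Stand-in for MFCC frames: two Gaussian blobs with a given spread."""
    rng = random.Random(seed)
    centers = [(0.0, 0.0), (5.0, 5.0)]
    return [tuple(rng.gauss(c, spread) for c in centers[i % 2]) for i in range(n)]

# Hypothetical data: "IDS" frames drawn with a larger spread than "ADS" frames.
ads = synth_frames(200, spread=0.5, seed=1)
ids_ = synth_frames(200, spread=1.0, seed=2)

for name, frames in [("ADS", ads), ("IDS", ids_)]:
    cents, assign = kmeans(frames, k=2)
    print(name, round(mean_cluster_radius(frames, cents, assign), 2))
```

With the larger spread standing in for IDS, the mean cluster radius comes out larger for the IDS-like data than for the ADS-like data, mirroring the direction of the reported effect; the GMM-based speaker-identification step is not sketched here.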
Article
Language use in everyday life can be studied using lightweight, wearable recorders that collect long-form recordings—that is, audio (including speech) over whole days. The hardware and software underlying this technique are increasingly accessible and inexpensive, and these data are revolutionizing the language acquisition field. We first place this technique into the broader context of the current ways of studying both the input being received by children and children's own language production, laying out the main advantages and drawbacks of long-form recordings. We then go on to argue that a unique advantage of long-form recordings is that they can fuel realistic models of early language acquisition that use speech to represent children's input and/or to establish production benchmarks. To enable the field to make the most of this unique empirical and conceptual contribution, we outline what this reverse engineering approach from long-form recordings entails, why it is useful, and how to evaluate success. Expected final online publication date for the Annual Review of Linguistics, Volume 8 is January 2022.
Chapter
In this chapter we examine the course of prenatal development, a period of astonishingly rapid and dramatic change. We consider disruptive influences and environmental hazards that can harm the developing fetus. We then briefly cover the process of birth and particular behavioral aspects of the newborn. Finally, we discuss problems associated with low birth weight and preterm birth.
Chapter
Contactless human-computer systems could be a noticeable advance in Intelligent Systems and Computing, fostering the development of future technologies with applications in various areas from AI to Medicine. In contactless human-computer systems, shared intentionality should play a key role in the deep learning of artificial neural networks that would provide adequate replacement of missing body parts. This concept design introduces a possible direction for further research on shared intentionality. The paper's central idea is that any biological system tends toward goal-directed coherence; shared intentionality is an essential quality of humans. The article presents a hypothesis on the neurobiological foundations of shared intentionality. The paper reviews studies of different biological systems and suggests the main qualities of goal-directed coherence that are common to all of them: instantaneousness in time, independence from distance, and insensitivity to sensory perception. The article also notes recent findings in physics. The integrated analysis poses the research question of whether a single harmonic oscillator can induce quantum entanglement between neurons and a computer interface. The author believes that an answer to this problem would tell us whether the human brain can directly govern a computer without introducing artificial elements into the nervous system. A valuable outcome of this ongoing research project on possible contactless brain-computer interaction may be direct or indirect evidence of (i) contactless brain-computer interaction; (ii) the synergy of this interaction. This synergy is an additional outcome that cannot be achieved separately by either side of the interaction.
Chapter
Phobia is an anxiety disorder that affects 13% of the world's population and is manifested through an extreme and irrational fear of objects or situations. A lot of research has been done on the contributing factors to the onset, development, and maintenance of phobias, the underlying cognitive and behavioral processes, physical manifestations, and treatment methods. In this chapter, we present a new automated approach to phobia therapy, based on the integration of virtual reality technology, artificial intelligence, and affective computing, i.e., the ability of computational machines to recognize, adapt, and respond intelligently to human emotions. First, we introduce the results obtained by applying various machine learning algorithms to classify the six basic emotions (anger, disgust, fear, joy, sadness, and surprise) based on the physiological recordings and self-reported ratings from the DEAP database. Then we present the stages of development and evaluation of a virtual environment for treating acrophobia that relies on gradual exposure to stimuli, accompanied by physiological signal monitoring. In a pilot experiment, we proposed a model for automatically adapting exposure scenarios according to the users' emotional states, which was tested on four acrophobic subjects. The results obtained are then critically analyzed, and we discuss the progress of the research and its current limitations. Finally, we introduce the conceptual design of a system for phobia treatment that can be used as a therapy tool in clinical setups and also for home training, integrating virtual reality, machine learning, gamification techniques, biophysical data recording, emotion recognition, and automatic scenario adaptation. The system uses an intelligent virtual therapist that can understand emotions based on physiological signals, provides encouragement, and changes the level of exposure according to the user's affective states.
Article
Full-text available
In early infancy, melody provides the most salient prosodic element for language acquisition, and there is ample evidence of infants' precocious aptitudes for musical and speech melody perception. Yet little is known about the melody patterns of infants' vocalisations. In a search for developmental regularities of cry and non-cry vocalisations and for building blocks of prosody (intonation) over the first 6 months of life, more than 67,500 melodies (fundamental frequency contours) of 277 healthy infants from monolingual German families were quantitatively analysed. Based on objective criteria, vocalisations with well-identifiable melodies were grouped into those exhibiting a simple (single-arc) or complex (multiple-arc) melody pattern. Longitudinal analyses using fractional polynomial multi-level mixed-effects logistic regression models were applied to these patterns. A significant age-dependent (but not sex-dependent) developmental pattern towards greater complexity was demonstrated in both vocalisation types over the observation period. The theoretical concept of melody development (MD-Model) contends that melody complexification is an important building block on the path towards language. Recognition of this developmental process will considerably improve not only our understanding of early preparatory processes for language acquisition, but most importantly also allow for the creation of clinically robust risk markers for developmental language disorders.
Article
Full-text available
Objective: the transition period in which men become fathers might provide an important window of opportunity for parenting interventions that may produce long-term positive effects on paternal care and, consequently, child development. Existing prenatal programs traditionally focus on maternal and infant health and seldom involve the father. Study design: This paper describes an interaction-based prenatal parenting intervention program for first-time fathers using ultrasound images, the Prenatal video Feedback Intervention to promote Positive Parenting (VIPP-PRE). We randomised a group of expectant fathers (N = 73) to either the VIPP-PRE or a control condition. Results: Expectant fathers thought the VIPP-PRE was more helpful and influenced their insights into their babies to a greater extent than the control condition. Expectant fathers receiving the VIPP-PRE reported that they particularly liked seeing and interacting with their unborn children as well as receiving feedback on these interactions. The intervention was well received and was considered feasible by both expectant fathers and sonographers and midwives. Discussion: We discuss the VIPP-PRE based on the experiences and perspectives of fathers, interveners, and sonographers and midwives.
Chapter
Because the processing of music and of language are closely interlinked, above all during the first two years of life, and because the rhythmic-prosodic (musical) structuring of child-directed speech is fundamental to the child's language acquisition, there are many starting points for music-based support or therapy. A further starting point is the understanding of music as a code and a form of expression through which we humans communicate and interact. The chapter presents these starting points for music-based language and communication support and therapy, together with findings on music and prosody processing in early language acquisition and on musical transfer effects.
Article
Full-text available
Psychological changes in pregnant women induce communication towards the fetus. Maternal vocal behavior induces fetal reactions. Prenatal communication leads fetuses and newborns to perform auditory discriminations. Fetuses in at-risk pregnancies present difficulties in these matters. To study possible changes in vocal activity during healthy and at-risk pregnancies, the SMPNC was organized with 28 items related to maternal vocal communication. The SMPNC was administered to pregnant women waiting for sonograms in the third trimester. Adequate indexes (KMO = .888; BTS, χ2 = 2792.795, df = 378, p < .001) allowed several factorial analyses. Analysis with Equamax rotation identified five factors: Verbal Communication With the Baby, Engaging the Partner in the Communication Towards the Baby, Contents Shared in the Communication Towards the Baby, Perception About the Baby's Auditory Competence, and Availability for the Communication Towards the Baby. Together, the items of these factors constitute a Total Scale. Subscales and the Total Scale were correlated with sociodemographic and clinical variables. Some subscales of the SMPNC correlate negatively with variables related to reproductive status. These results suggest that maternal motivation for prenatal communication is especially high during the first gestation and decreases as mothers must divide their attention among several children.
Article
Full-text available
The first part of this article summarises research on the infant's attachment behaviours towards the mother and/or father and on parents' bonding to the infant, as well as on the neurobiological interactions that take place between mother and child during pregnancy and at birth. The second part presents data from a microanalytic study of 31 videos of the first infant-parent interactions, which build the infant-parent emotional bonds in three stages at birth. The first stage is that of attachment and bonding, which result from interactions between the cascade of defence reactions of the newborn's FEAR system and the protective and soothing reflexes of the FEAR and/or CARE systems of the mother and/or father. The most intense newborn reactions, collapse and tonic immobility, were significantly correlated with maternal prenatal stress (p = 0.015) and could increase the risks of disorders of the child's physical and mental development. The second stage consists of exchanges of gaze, which can soothe the newborn and trigger the formation of a loving bond with the mother and/or father if they are emotionally available to accept their child's gaze. Breastfeeding can constitute a third stage in the construction, and above all the strengthening, of the bonds between mother and child. A first implication of these data concerns antenatal parenting education, which should inform parents of the role of the first interactions at birth in bond formation and prepare them, especially mothers, to be available to soothe their child and to engage in meeting its gaze. For professionals, another implication is not to hinder the first interactions between the newborn, its mother and/or father during delivery, because these interactions are constitutive of attachment and bonding.
Article
There is increasing evidence of ongoing changes occurring in short-term and long-term motor and language outcomes in former premature infants. As rates of moderate to severe cerebral palsy (CP) have decreased, there has been increased awareness of the impact of mild CP and of developmental coordination disorder on the preterm population. Language delays and disorders continue to be among the most common outcomes. In conjunction with medical morbidities, there is increased awareness of the negative impact of family psycho-socioeconomic adversities on preterm outcomes and of the importance of intervention for these adversities beginning in the neonatal ICU.
Chapter
Research studies over the past 40 years have established that the maternal voice is a prominent feature of the prenatal environment, that the fetus responds to it, and that prenatal learning carries over into early postnatal life. The primary aim of the chapter is to describe what is known through research about prenatal exposure to the mother's voice, especially through audition. A second aim is to present a consideration of nonauditory experience such as the vestibular and, possibly, cutaneous sensations that are uniquely linked to auditory stimulation by the maternal voice. A third aim is to raise a question about the necessity of prenatal experience with the acoustic aspects of the maternal voice, given emerging data from deaf infants who receive cochlear implants many months after birth. The chapter concludes by considering implications for the care of hospitalized preterm infants whose exposure to the mother's voice and other sounds is atypical. Chapter conclusions are (1) fetal auditory experience with the mother's voice begins around 24 weeks after conception, (2) the maternal voice is potentially a rich source of multimodal stimulation and information, and (3) for favorable postnatal development, the role that is played by very early exposure to the maternal voice is not yet understood.
Article
Full-text available
Pregnant women recited a short child's rhyme, "the target", aloud each day between the thirty-third and thirty-seventh weeks of their fetuses' gestation. Then their fetuses were stimulated with tape recordings of the target and a control rhyme. The target elicited a decrease in fetal heart rate whereas the control did not. Thus, fetuses' exposure to specific speech sounds can affect their subsequent reactions to those sounds. More generally, the result suggests that third-trimester fetuses become familiar with recurrent, maternal speech sounds.
Article
Full-text available
A counterbalanced between-groups design with repeated measures was used to demonstrate that both male and female neonates would habituate and dishabituate to repeated and novel speech sounds. 24 full-term newborns with a birth weight greater than 2,400 g and a mean age of 72.2 hrs served as Ss in a head-turning sound-localization task. Experimental Ss listened to repetitions of a 2-syllable word until they turned toward the sound. They then heard a new word in the dishabituation phase. This was followed by trials with the original word. Results indicate the reliable occurrence of 2 basic processes in the neonate: spatial orientation to sounds and response decrement to repeated speech sounds followed by response increment to novel speech sounds.
Article
Full-text available
Using a habituation/dishabituation procedure, near-term foetuses (36-39 weeks gestational age) were tested in a low variability HR state, to examine whether they could discriminate between a male and a female voice repeatedly uttering the same short sentence. Prosody and loudness of the two voices were controlled. Once the foetal heart rate (HR) habituated to the first voice, the effect of a second voice was investigated in two experimental conditions: male/female voice and female/male voice. HR variations after the onset of the second voice were compared to those occurring in two control conditions in which the same voice was presented twice (male/female voice and female/female voice). Highly conservative statistical criteria taking each subject's pre-stimulus HR variability into account showed that most foetuses exposed to the voice change displayed decelerative cardiac changes, with no significant difference between the two conditions. These HR decelerations were found in the first seconds following the onset of the new voice, and reached their peak amplitude within 10 s in most subjects. These responses lasted more than 10 s for two-thirds of the experimental subjects. Mostly transient HR accelerations and only a few decelerative changes were recorded in the control subjects. Furthermore, mean amplitudes of these changes were significantly lower than the HR decelerations induced by the new voice in the experimental conditions, suggesting that the latter were not spontaneous HR modifications but rather cardiac responses to the voice change. It is argued that near-term foetuses may perceive a difference between voice characteristics of two speakers when they are highly contrasted for fundamental frequency and timbre.
Article
Full-text available
The first steps toward bilingual language acquisition have already begun at birth. When tested on their preference for English versus Tagalog, newborns whose mothers spoke only English during pregnancy showed a robust preference for English. In contrast, newborns whose mothers spoke both English and Tagalog regularly during pregnancy showed equal preference for both languages. A group of newborns whose mothers had spoken both Chinese and English showed an intermediate pattern of preference for Tagalog over English. Preference for two languages does not suggest confusion between them, however. Study 2 showed that both English monolingual newborns and Tagalog-English bilingual newborns could discriminate English from Tagalog. The same perceptual and learning mechanisms that support acquisition in a monolingual environment thus also naturally support bilingual acquisition.
Article
Full-text available
Human fetuses (35-38 weeks GA), exposed to a repeated pair of syllables, either [ba] [bi] or [bi] [ba], at 95 dB SPL when in a low heart rate variability state, display a significant heart rate deceleration. Changing the order of the syllables in the pair, [ba] [bi] becoming [bi] [ba] (or the reverse), induces a new cardiac deceleration. This suggests that fetuses demonstrate auditory discrimination abilities for speech units like syllables.
Article
Full-text available
Accelerative and decelerative cardiac responses and motor responses (leg movements) of 37-40 week (GA) fetuses are analyzed as a function of the frequency of three octave-band noises (respectively centered at 500 Hz, 2000 Hz and 5000 Hz) and of their intensity level (100, 105, 110 dB SPL, ex utero), during high (HV) and low (LV) heart rate (HR) variability pattern states. In both states, increasing the frequency and/or the intensity of the acoustic stimulation: (i) increases the ratios and amplitudes of accelerations, and the motor response ratios, (ii) reduces deceleration ratios and motor response latencies. Cardiac and motor reactiveness are higher in HV than in LV, with acceleration ratios always greater than motor ones. However, when a high intensity and/or frequency is used, the reactiveness differences between states disappear. Low intensity and/or frequency stimulation levels induce a majority of decelerations.
Article
Full-text available
By sucking on a nonnutritive nipple in different ways, a newborn human could produce either its mother's voice or the voice of another female. Infants learned how to produce the mother's voice and produced it more often than the other voice. The neonate's preference for the maternal voice suggests that the period shortly after birth may be important for initiating infant bonding to the mother.
Article
Full-text available
Maturation of fetal response to music was characterized over the last trimester of pregnancy using a 5-minute piano recording of Brahms' Lullaby, played at an average of 95, 100, 105 or 110 dB (A). Within 30 seconds of the onset of the music, the youngest fetuses (28-32 weeks GA) showed a heart rate increase limited to the two highest dB levels; over gestation, the threshold level decreased and a response shift from acceleration to deceleration was observed for the lower dB levels, indicating attention to the stimulus. Over 5 minutes of music, fetuses older than 33 weeks GA showed a sustained increase in heart rate; body movement changes occurred at 35 weeks GA. These findings suggest a change in processing of complex sounds at around 33 weeks GA, with responding limited to the acoustic properties of the signal in younger fetuses but attention playing a role in older fetuses.
Article
Full-text available
Recent observation of maternal voice recognition provides evidence of rudimentary memory and learning in healthy term fetuses. However, such higher order auditory processing has not been examined in the presence of maternal hypertension, which is associated with reduced and/or impaired uteroplacental blood flow. In this study, voice processing was examined in 40 fetuses (gestational ages of 33 to 41 weeks) of hypertensive and normotensive women. Fetuses received 2 min of no sound, 2 min of a tape-recorded story read by their mothers or by a female stranger, and 2 min of no sound while fetal heart rate was recorded. Results demonstrated that fetuses in the normotensive group had heart rate accelerations during the playing of their mother's voice, whereas the response occurred in the hypertensive group following maternal voice offset. Across all fetuses, a greater fetal heart rate change was observed when the amniotic fluid index was above compared to below the median (i.e., 150 mm), indicating that amniotic fluid volume may be an independent moderator of fetal auditory sensitivity. It was concluded that differential fetal responding to the mother's voice in pregnancies complicated by maternal hypertension may reflect functional elevation of sensorineural threshold or a delay in auditory system maturation, signifying functional differences during fetal life or subtle differences in the development of the central nervous system.
Article
Several studies have found that human infants recognize the sight, sound, smell, and touch of their mothers. Maternal recognition occurs early in development, often being influenced by prenatal experiences. In contrast, the development of infants' recognition of their fathers is not understood. We investigated whether 4-month-old human infants preferred their fathers' voices in two different speaking contexts. In both Experiments 1 and 2, infants were tested with fathers' adult-directed (AD) or infant-directed (ID) speech. In all experiments, infants were allowed to listen to recordings of either the father's or another's voice contingent on their visual attention. Results from the first two experiments showed that infants did not prefer their fathers' voices over unfamiliar male voices. However, in Experiment 3, 4-month-olds showed that they could discriminate the male voices heard in the previous studies. These data were interpreted as supporting the hypothesis that the experiences necessary for the development of maternal preferences are different from those supporting paternal preferences, and that perhaps multimodal cues are necessary for father recognition in infancy. © 1999 John Wiley & Sons, Inc.
Article
This project traced the maturation of the human auditory cortex from midgestation to young adulthood, using immunostaining of axonal neurofilaments to determine the time of onset of rapid conduction. The study identified 3 developmental periods, each characterized by maturation of a different axonal system. During the perinatal period (3rd trimester to 4th postnatal month), neurofilament expression occurs only in axons of the marginal layer. These axons drive the structural and functional development of cells in the deeper cortical layers, but do not relay external stimuli. In early childhood (6 months to 5 years), maturing thalamocortical afferents to the deeper cortical layers are the first source of input to the auditory cortex from lower levels of the auditory system. During later childhood (5 to 12 years), maturation of commissural and association axons in the superficial cortical layers allows communication between different subdivisions of the auditory cortex, thus forming a basis for more complex cortical processing of auditory stimuli.
Article
The intrauterine environment presents a rich array of sensory stimuli to which the fetus responds. The maternal voice is perhaps the most salient of all auditory stimuli. The following experiments examined the movement response of the fetus and newborn to its mother's voice and a strange female's voice and to voices speaking normally and speaking 'motherese'. Newborns (2-4 days of age) discriminated, as measured by the number of movements exhibited to the presentation of the stimuli, between their mother's voice and a stranger's voice and between normal speech and 'motherese', in both cases the former being preferred. Fetuses, 36 weeks of gestational age, evidenced no ability to discriminate between their mother's and a stranger's voice played to them via a loudspeaker on the abdomen but did discriminate between their mother's voice when played to them by a loudspeaker on the abdomen and the mother's voice produced by her speaking. The results are further evidence of the ability of the fetus to learn prenatally and indicate a possible role for prenatal experience of voices in subsequent language development and attachment.
Article
The developmental origins of the human mind have been the subject of much speculation. This paper reviews studies of the behaviour of the human fetus, to assess the beginnings of mind. Whilst this evidence indicates that the fetus is behaviourally active before birth, it does not answer questions central to the existence of the mind directly, i.e. the presence of self-awareness, consciousness. However, the behaviour of the fetus may be used to explore the development of mind. Study of the prenatal ontogenesis of behaviour suggests that the mind will emerge in an immature form and that stimulation received in utero, and the behaviour emitted, will play an important role in its development.
Article
Four experiments are described which investigated the role of the mother's voice in facilitating recognition of the mother's face at birth. Experiment 1 replicated our previous findings (Br. J. Dev. Psychol. 1989; 7: 3-15; The origins of human face perception by very young infants. Ph.D. Thesis, University of Glasgow, Scotland, UK, 1990) indicating a preference for the mother's face when a control for the mother's voice and odours was used only during the testing. A second experiment adopted the same procedures, but controlled for the mother's voice from birth through testing. The neonates were at no time exposed to their mother's voice. Under these conditions, no preference was found. Further, neonates showed only few head turns towards both the mother and the stranger during the testing. Experiment 3 looked at the number of head turns under conditions where the newborn infants were exposed to both the mother's voice and face from birth to 5 to 15 min prior to testing. Again, a strong preference for the mother's face was demonstrated. Such preference, however, vanished in Experiment 4, when neonates had no previous exposure to the mother's voice-face combination. The conclusion drawn is that a prior experience with both the mother's voice and face is necessary for the development of face recognition, and that intermodal perception is evident at birth. The neonates' ability to recognize the face of the mother is most likely to be rooted in prenatal learning of the mother's voice. Copyright © 2004 John Wiley & Sons, Ltd.
Article
Exp I, with 58 newborns, found that newborns who had been exposed to the theme tune of a popular TV program during pregnancy exhibited changes in heart rate, number of movements, and behavioral state 2–4 days after birth. Evidence of learning had disappeared by 21 days of age. Exp II, with 40 fetuses (29–37 wks of gestational age), found fetuses exhibited changes in their movements when played a tune heard previously during pregnancy. As with Exp I, this was not the result of postnatal or genetic factors and was specific to the tune learned. Fetuses increased their movements on hearing the tune; newborns decreased their movements. Results demonstrate that it is possible to assess fetal learning before and after birth.
Article
The developmental origins of the ability to hear have been the subject of much debate and speculation. Since time immemorial, there has been much anecdotal evidence that the fetus responds to sound. In contrast, until the late 19th Century, scientific evidence and opinion held that the newborn was deaf and only developed the ability to hear in the first weeks after birth. At the beginning of the 20th Century the prevailing scientific and clinical view changed and it was accepted that the newborn was able to hear at birth. This led to much speculation about, and limited experimental study of, when the individual first started to hear. Detailed study of the ontogeny of auditory abilities had to wait until the 1980s, when scientific opinion regarding the abilities of the newborn changed and the ultrasound technology with which to observe the fetus in utero became widely available. Research thus commenced in earnest to investigate the response of the fetus to sound. Despite this increased interest in fetal hearing, experimental studies have concentrated upon the responsiveness to sound in late pregnancy and not on the developmental origins of auditory abilities. It is the aim of this review to examine the ontogenesis of hearing in the fetus.
Article
Newborn infants whose mothers were monolingual speakers of Spanish or English were tested with audio recordings of female strangers speaking either Spanish or English. Infant sucking controlled the presentation of auditory stimuli. Infants activated recordings of their native language for longer periods than the foreign language.
Article
Pregnant women recited a particular speech passage aloud each day during their last 6 weeks of pregnancy. Their newborns were tested with an operant-choice procedure to determine whether the sounds of the recited passage were more reinforcing than the sounds of a novel passage. The previously recited passage was more reinforcing. The reinforcing value of the two passages did not differ for a matched group of control subjects. Thus, third-trimester fetuses experience their mothers' speech sounds, and prenatal auditory experience can influence postnatal auditory preferences.
Article
Term fetuses discriminate their mother’s voice from a female stranger’s, suggesting recognition/learning of some property of her voice. Identification of the onset and maturation of the response would increase our understanding of the influence of environmental sounds on the development of sensory abilities and identify the period when speech and language might influence auditory processing. To characterize the onset and maturation of fetal heart rate response to the mother’s voice. 143 fetuses from 29 to 40 weeks gestational age (GA) received a standardized protocol: no-sound pre-voice baseline (2 min), audio recording of their mother reading a story (2 min), no-sound post-voice (2 min). The voice was delivered 10 cm above the maternal abdomen at an average of 95 dB A; heart rate was recorded continuously. For data analyses, fetuses were categorized into four age groups: 29–31, 32–34, 35–37, and > 37 weeks GA. Onset of response to the mother’s voice occurred at 32–34 weeks GA. From 32 to 37 weeks GA, there was an initial heart rate decrease followed by an increase. At term, there was a response shift to an initial heart rate increase. The percentage of fetuses responding increased over gestation from 46% at 32–34 weeks GA to 83% at term. A relatively long latency and sustained duration of the heart rate response suggest auditory processing, the formation of neural networks, above the level of the brainstem.
Article
Fetal speech and language abilities were examined in 104 low-risk fetuses at 33-41 weeks gestational age using a familiarization/novelty paradigm. Fetuses were familiarized with a tape recording of either their mother or a female stranger reading the same passage and subsequently presented with a novel speaker or language: Studies (1) & (2) the alternate voice, (3) the father's voice, and (4) a female stranger speaking in native English or a foreign language (Mandarin); heart rate was recorded continuously. Data analyses revealed a novelty response to the mother's voice and a novel foreign language. An offset response was observed following termination of the father's and a female stranger's voice. These findings provide evidence of fetal attention, memory, and learning of voices and language, indicating that newborn speech/language abilities have their origins before birth. They suggest that neural networks sensitive to properties of the mother's voice and native-language speech are being formed.
Article
In 2 experiments, the majority of 21 newborn infants who were maintained in an alert state consistently turned their heads toward a continuous sound source presented 90 degrees from midline. For most infants this orientation response was rather slow, taking median latencies of 2.5 sec to begin and 5.5 sec to end. The most important factors in producing this impressive response seem to be the method of holding the infants during testing and the nature of the auditory stimulus.
Article
We sought to determine the degree to which noises and voices are attenuated or enhanced as they pass into the uterus. In eight parturients, a hydrophone in the uterus was used to measure sound pressure levels for externally generated one-third-octave band noises, male and female voices, and the subject's voice. Low-frequency sounds (0.125 kHz) generated outside the mother were enhanced by an average of 3.7 dB. There was a gradual increase in attenuation for increasing frequencies, with a maximum attenuation of 10.0 dB at 4.0 kHz. Sound attenuation was slightly less if the insonation was from in front of the woman rather than behind. Intrauterine sound levels of the mother's voice were enhanced by an average of 5.2 dB, whereas external male and female voices were attenuated by 2.1 and 3.2 dB, respectively. The effect of frequency on attenuation, the differences between front and back insonation, and the differences between speakers in attenuation were all statistically significant. The intrauterine environment is rich with externally generated sounds. This may imply fetal risk from maternal noise exposure and may aid in understanding fetal imprinting from prenatal exposure to voices.
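The mean adjustments reported in that study can be combined into a simple back-of-the-envelope estimate of intrauterine sound levels. The sketch below is not the authors' code; it only applies the study's mean enhancement and attenuation values (dB) to a chosen external level, and the band/source labels are illustrative names:

```python
# Mean intrauterine adjustments (dB) quoted in the abstract; individual
# values vary with subject and insonation direction.
ADJUST_DB = {
    "band_0.125kHz": +3.7,   # low-frequency noise band enhanced
    "band_4.0kHz": -10.0,    # maximum attenuation, at 4.0 kHz
    "mother_voice": +5.2,    # mother's own voice enhanced
    "male_voice": -2.1,      # external male voice attenuated
    "female_voice": -3.2,    # external female voice attenuated
}

def intrauterine_spl(external_db: float, source: str) -> float:
    """Rough estimated intrauterine SPL for an external level and source type."""
    return external_db + ADJUST_DB[source]

print(round(intrauterine_spl(60.0, "mother_voice"), 1))  # 65.2
print(round(intrauterine_spl(70.0, "band_4.0kHz"), 1))   # 60.0
```

On these means, a mother speaking at 60 dB reaches the fetus louder than an external female voice at the same level by about 8.4 dB, which is consistent with the abstract's point that the maternal voice is a privileged prenatal stimulus.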
Article
The effect of the amount of amniotic fluid on the form of fetal general movements was studied longitudinally in 19 pregnancies complicated by premature rupture of the amniotic membranes (PROM). Before birth, general movements were studied weekly by means of 1-h ultrasound observations, performed under standardized conditions. In the early postnatal period, 11 of these infants were followed with video recordings of their spontaneous movements. In the fetus, speed and amplitude of general movements were directly related to the reduction in amniotic fluid. A moderate reduction of amniotic fluid was associated with a decrease in amplitude, while a more severe reduction of amniotic fluid caused a decrease in speed as well. Postnatally, the small amplitude and low speed showed a marked tendency to normalize between 1 and 5 weeks. These results are important for the qualitative assessment of motor behaviour in pregnancies with obstetrical complications that are associated with oligohydramnios (such as PROM or intra-uterine growth retardation).
Article
Infants stimulated with 8-s recordings of speech and voices reading numbers showed a discrimination between their own mothers' and alien voices. In general, the infants' heart rates rose more in response to their mothers' than to an alien voice. However, infants tested less than 24 h after birth responded with significant heart rate deceleration to the mother's spontaneous speech and to the mother reading numbers. Response to the father's voice was also deceleration but to all alien voices was acceleration. Older infants' responses also tended to be acceleratory to most stimuli. Results support the suggestion that sounds which are repeatedly experienced before birth (especially the mother's voice) become familiar to the fetus so that the neonate responds selectively by orienting to them during the first few hours after birth.
Article
Human newborns were tested with an operant choice procedure to determine whether they would prefer their fathers' voices to those of other males. No preference was observed. Subsequent testing revealed that they could discriminate between the voices but that the voices lacked reinforcing value. These results contrast sharply with newborns' perception of their mothers' voices in particular, and of female voices in general. The data were interpreted as supporting the hypothesis that prenatal experience significantly influences human newborns' earliest voice preferences.
Article
Previous research has revealed that the human fetus responds to sound, but to date there has been little systematic investigation of the development of fetal hearing. The development of fetal behavioural responsiveness to pure tone auditory stimuli (100 Hz, 250 Hz, 500 Hz, 1000 Hz, and 3000 Hz) was examined from 19 to 35 weeks of gestational age. Stimuli were presented by a loudspeaker placed on the maternal abdomen and the fetus's response, a movement, recorded by ultrasound. The fetus responded first to the 500 Hz tone, with the first response observed at 19 weeks of gestational age. The range of frequencies responded to expanded first downwards to lower frequencies, 100 Hz and 250 Hz, and then upwards to higher frequencies, 1000 Hz and 3000 Hz. At 27 weeks of gestational age, 96% of fetuses responded to the 250 Hz and 500 Hz tones but none responded to the 1000 Hz and 3000 Hz tones. Responsiveness to 1000 Hz and 3000 Hz tones was observed in all fetuses at 33 and 35 weeks of gestational age, respectively. For all frequencies there was a large decrease (20-30 dB) in the intensity level required to elicit a response as the fetus matured. The observed pattern of behavioural responsiveness reflects underlying maturation of the auditory system. The sensitivity of the fetus to sounds in the low frequency range may promote language acquisition and result in increased susceptibility to auditory system damage arising from exposure to intense low frequency sounds.
Article
The response of the premature fetus to speech stimuli was studied in 41 healthy pregnant patients at 26-34 weeks gestation. Speech stimuli consisted of repeated syllables ('ee' and 'ah') presented externally over the maternal abdomen at either 100, 105, or 110 decibels (dB). Sound stimuli were delivered during periods of both high and low fetal heart rate (FHR) variability. During periods of low FHR variability, a decrease in fetal heart rate and an increase in the standard deviation of heart rate were found. During periods of high FHR variability, no significant change in either of these measures was observed. This is the first clear demonstration of heart rate responses to speech stimuli in the premature fetus. As is the case in the term fetus, this response is dependent on baseline heart rate variability, which is the primary determinant of fetal state. The clinical usefulness of this finding may be limited by the magnitude of the response.
Article
The aim of this study was to investigate the functional development of cochlear active mechanisms and of the medial efferent olivocochlear system. Evoked (EOAEs) and spontaneous (SOAEs) otoacoustic emissions were recorded in 42 preterm neonates (conceptional age ranging from 33 to 39 weeks) and a control group of 20 young normal-hearing adults. Medial olivocochlear system activity was examined by coupling evoked otoacoustic emission recording with contralateral stimulation. Otoacoustic emission recordings were carried out using the Otodynamic ILO88 software and hardware. The stimuli were unfiltered clicks, and the contralateral stimulation was broadband noise of 50 and 70 dB SPL delivered by an Adam generator. The results revealed the presence of EOAEs and SOAEs from at least 33 weeks in humans, suggesting that the functional maturation of the outer hair cells is nearly complete at that age. The study further revealed that the contralateral stimulation had no effect on evoked otoacoustic emissions in preterm neonates. The lack of activity observed in the medial olivocochlear system indicated functional immaturity there, at least before full-term birth.
Article
In neonates and infants, hearing impairment leads to impaired language and cognitive development. For that reason, early detection of this sensory deficit is of outstanding importance, particularly in pre-term neonates, who constitute a high-risk population in regard to very early acquired hearing loss. Evoked (EOAE) and spontaneous otoacoustic emission (SOAE) recording in 93 pre-term and full-term neonates revealed that this technique is potentially useful for auditory screening in neonatology units. EOAEs and SOAEs can be recorded successfully from 30 weeks of conceptional age. SOAEs were found to be prevalent in females and presented higher peak numbers in right than in left ears. Furthermore, SOAE incidence in pre-term and full-term neonates was found to be high in EOAE-positive ears, associated with strong and robust EOAEs.
Article
In attempting to correlate developmental anatomical data with electrophysiological data on maturation of the auditory brain stem response (ABR), a model of ABR generation was necessary to match neuroanatomical structures to ABR components. This model has been developed by reviewing quantitative studies of human brain stem nuclei, results of intrasurgical recordings, studies of correlation of pathology with ABR waveform alterations, and findings from direct stimulation of the human cochlear nuclei through a brain stem implant device. Based on this material, it was assumed that waves I and II are generated peripherally in the auditory nerve and that waves III, IV, and V are generated centrally, i.e., by brain stem structures. It was further assumed that wave III is generated by axons emerging from the cochlear nuclei in the ventral acoustic stria and that waves IV and V reflect activity in parallel subpopulations of these ascending axons at a higher brain stem level. Beyond the cochlear nucleus, the largest component of the brain stem auditory pathway consists of axons projecting without interruption from the cochlear nuclei to the contralateral lateral lemniscus and inferior colliculus. In the proposed model of ABR generation, the III-IV interwave interval is assumed to reflect only axonal conduction in this asynaptic pathway. Electrophysiological data from infants indicate that the III-IV interwave interval becomes adult-like by the time of term birth. The second largest component of the brain stem auditory pathway is the bilateral projection through the medial olivary nucleus. The model assumes that activity in this monosynaptic pathway, consisting of axonal conduction time plus one synaptic delay, is reflected in the III-V interwave interval. If both of the preceding assumptions are true, the IV-V interwave interval represents the difference between the two pathways, i.e., the time of transmission across one synapse. 
The electrophysiological ABR data indicate that the IV-V interval does not mature until one year of age. It is also possible to apply this model to the peripherally generated portion of the ABR. The I-II interwave interval, assumed to represent solely conduction in VIIIth nerve axons, is adult-like before the time of term birth. The II-III interval, presumed to contain a synapse in the cochlear nuclear complex, does not reach an adult level until between 1 and 2 yr postnatal age.
Article
The cardiac orienting reflex is elicited by a low-intensity sound; it consists of a sustained heart rate (HR) deceleration and is a specific physiological correlate of cognitive processing. In this study we examined the relationship between behavioral state and the cardiac orienting reflex in 75 human fetuses between 36 and 40 weeks gestation. Each fetus was stimulated with a 30-s speech sound at an average intensity of 83 dB SPL in quiet sleep (QS) and active sleep (AS). The fetal cardiac electrical signal was captured transabdominally at a rate of 1024 Hz and fetal R-waves were extracted using adaptive signal processing. Fetal behavioral states were assigned based on HR pattern and the presence or absence of eye and general body movements. We found that a significant HR deceleration occurred, in both QS and AS, following stimulus onset. However, HR decelerations occurred more often in QS than AS; and for fetuses exhibiting a HR deceleration, the magnitude of the deceleration was greater in AS compared to QS. In addition, in AS, female fetuses exhibited a larger, more sustained HR deceleratory response than male fetuses, but the seconds × gender interaction in QS was not significant. Based on these results, we concluded that behavioral state is an important determinant of the HR deceleratory response in human fetuses.
Article
Several studies have found that human infants recognize the sight, sound, smell, and touch of their mothers. Maternal recognition occurs early in development, often being influenced by prenatal experiences. In contrast, the development of infants' recognition of their fathers is not understood. We investigated whether 4-month-old human infants preferred their fathers' voices in two different speaking contexts. In Experiments 1 and 2, infants were tested with fathers' adult-directed (AD) or infant-directed (ID) speech, respectively. In all experiments, infants could listen to recordings of either their father's voice or an unfamiliar male's voice, contingent on their visual attention. Results from the first two experiments showed that infants did not prefer their fathers' voices over unfamiliar male voices. However, in Experiment 3, 4-month-olds demonstrated that they could discriminate between the male voices heard in the previous studies. These data were interpreted as supporting the hypothesis that the experiences necessary for the development of maternal preferences are different from those supporting paternal preferences, and that perhaps multimodal cues are necessary for father recognition in infancy.
Article
The purpose of the study was to characterize the onset and maturation of airborne sound-elicited responses in low- and high-risk preterm fetuses. In Study 1, a total of 91 low-risk fetuses at 27, 30, 33, and 36 weeks gestational age (GA) received three sound trials at 90, 100, 105, and 110 dB and three no-stimulus control trials. The onset of cardiac acceleration and body movement responses occurred at 30 weeks GA. Maturation of the cardiac response was observed with a decrease in threshold from 105-110 dB at 33 weeks GA to 100-105 dB at 36 weeks GA. In Study 2, the procedure was similar except that the 43 high-risk fetuses at 27, 30, and 33 weeks GA did not receive sounds at 90 dB. For the high-risk fetuses, the onset of cardiac and motor responses also occurred at 30 weeks GA. At 33 weeks GA, those high-risk fetuses subsequently born at term showed an increased magnitude of the cardiac acceleration response compared to low-risk fetuses. The results indicate that both low- and high-risk fetuses begin responding to sounds at the same gestational age. Differential responses observed over gestation in the high-risk group most likely indicate differential functional development of the auditory-response system.
Article
The human fetus in utero is able to respond to sounds in the amniotic fluid enveloping the fetus after about 20 weeks gestation. The pathway by which sound reaches and activates the fetal inner ear is not entirely known. It has been suggested that in this total fluid environment, the tympanic membrane and the round window membrane become 'transparent' to the sound field, enabling the sounds to reach the inner ear directly through the tympanic membrane and the round window membrane. It is also possible that sounds reach the inner ear by means of tympanic membrane–ossicular chain–stapes footplate conduction (as in normal air conduction). There is also evidence that sounds reach the fetal inner ear by bone conduction. Several animal and human models of the fetus in utero were studied here in order to investigate the pathway enabling sounds to reach and activate the fetal inner ear. This included studying the auditory responses to sound stimuli of animals and humans under water. It was clearly shown in all the models that the dominant mechanism was bone conduction, with little if any contribution from the external and middle ears. Based on earlier experiments on the mechanism and pathway of bone conduction, the results of this study lead to the suggestion that the skull bone vibrations induced by the sound field in the amniotic fluid enveloping the fetus probably give rise to a sound field within the fetal cranial cavity (brain and CSF) which reaches the fetal inner ear through fluid communication channels connecting the cranial cavity and the inner ear.
Article
After at least 20 weeks gestation, the human fetus in utero is able to hear and respond to external and internal (maternal) sounds. The external sounds are attenuated by maternal tissues and fluids: higher frequencies are attenuated by about 20 dB, whereas lower frequencies are only slightly reduced. The sounds in the amniotic fluid, which completely envelops the fetus, then reach the fetal inner ear by bone conduction. The sound pressure in the amniotic fluid induces skull vibrations which are transmitted directly into the contents of the cranial cavity (brain and CSF) and from there, presumably by fluid channels connecting them, into the cochlear fluids. A further stage of conductive attenuation is probably involved in this transmission. Since the fetus in utero receives oxygen by placental diffusion (less efficient than pulmonary diffusion), the fetal inner ear is hypoxic compared to that following birth (pulmonary oxygen diffusion). This leads to a reduction in the magnitude of the endocochlear potential, to a depression of cochlear transduction and amplification, and thus to an additional sensorineural component of threshold elevation in the fetus. Upon birth, these conductive and sensorineural attenuations are removed.
Article
This project traced the maturation of the human auditory cortex from midgestation to young adulthood, using immunostaining of axonal neurofilaments to determine the time of onset of rapid conduction. The study identified 3 developmental periods, each characterized by maturation of a different axonal system. During the perinatal period (3rd trimester to 4th postnatal month), neurofilament expression occurs only in axons of the marginal layer. These axons drive the structural and functional development of cells in the deeper cortical layers, but do not relay external stimuli. In early childhood (6 months to 5 years), maturing thalamocortical afferents to the deeper cortical layers are the first source of input to the auditory cortex from lower levels of the auditory system. During later childhood (5 to 12 years), maturation of commissural and association axons in the superficial cortical layers allows communication between different subdivisions of the auditory cortex, thus forming a basis for more complex cortical processing of auditory stimuli.