Article

Does a ‘musical’ mother tongue influence cry melodies? A comparative study of Swedish and German newborns

Author affiliations:
  • Department of Social and Behavioral Studies
  • German Center for Growth, Development and Health Promotion in Childhood and Adolescence (DeuZ-WEG e.V.), Berlin

Abstract

The foetal environment is filled with a variety of noises. Among the manifold sounds of the maternal respiratory, gastrointestinal and cardiovascular systems, the intonation properties of the maternal language are well perceived by the foetus, whose hearing system is already functioning during the last trimester of gestation. These intonation (melodic) features, reflecting native-language prosody, have been found to shape vocal learning. Having had ample opportunity to become familiar with their mother’s language in the womb, newborns have been found to exhibit salient pitch-based elements in their own cry melodies. An interesting question is whether intrauterine exposure to a maternal pitch-accent language such as Swedish, in which emphasized syllables are typically pronounced on a higher pitch relative to other syllables, affects newborns’ cry melody (fundamental frequency contour). The present study aimed to answer this question by quantitatively analysing and comparing melody structure in 52 Swedish and 79 German newborns. In accordance with previous approaches, cry melody structure was analysed by calculating a melody complexity index (MCI) expressing the share of cries exhibiting two or more (well-defined) arc-like substructures uttered during the recording sessions. A low MCI reflects a dominance of cries with a ‘simple’, i.e. single-arc, melody. A significantly higher MCI was found in the Swedish infant group, which further corroborates the assumption that the well-known foetal sensitivity to musical (melodic) stimuli shapes infants’ cry melody.
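The melody complexity index described in the abstract lends itself to a simple computational reading. The sketch below (Python) is only an illustrative approximation: the arc-detection rule, the prominence threshold and the function names are assumptions, not the authors' published algorithm, which relies on dedicated contour-analysis software.

```python
# Hypothetical sketch of a melody complexity index (MCI): the share of cries
# whose f0 contour contains two or more arc-like substructures. Arc counting is
# simplified to prominent local maxima; the 30 Hz threshold is illustrative only.

from typing import List
import numpy as np

def count_melody_arcs(f0: np.ndarray, min_prominence_hz: float = 30.0) -> int:
    """Count arc-like substructures as prominent local maxima in a voiced f0 contour."""
    f0 = f0[f0 > 0]                      # keep voiced frames only
    if f0.size < 3:
        return 0
    arcs = 0
    for i in range(1, f0.size - 1):
        if f0[i] > f0[i - 1] and f0[i] >= f0[i + 1]:
            # count a peak as an arc only if it rises clearly above its neighbourhood
            local_floor = min(f0[max(0, i - 10):i + 10])
            if f0[i] - local_floor >= min_prominence_hz:
                arcs += 1
    return arcs

def melody_complexity_index(cry_contours: List[np.ndarray]) -> float:
    """MCI = proportion of cries with two or more arc-like substructures."""
    complex_cries = sum(1 for f0 in cry_contours if count_melody_arcs(f0) >= 2)
    return complex_cries / len(cry_contours) if cry_contours else 0.0
```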


... We all share the amazing capacity to produce, perceive and enjoy (or dislike) music, probably soon after we are born [26][27][28], or perhaps even before (e.g. [29,30]), and music has a substantial capacity to affect our emotions [31,32]. ...
... He also highlights the example of reading and writing, both cultural inventions, which are each partially associated with functional specializations in specific brain regions (a product of neural plasticity) and in which disorders may sometimes be driven by genetic causes, at least for reading. However, musicality (unlike making fire, reading, writing or even music) is not a behaviour per se but a capacity that seems not to be taught or learned and appears to be present from early infancy [71][72][73][74], or even before delivery [29,30]. Thus, the question of whether music is an adaptation could be a dead end (see [75]), but the origin of musicality is anything but. ...
Article
Full-text available
Studies show that specific vocal modulations, akin to those of infant-directed speech (IDS) and perhaps music, play a role in communicating intentions and mental states during human social interaction. Based on this, we propose a model for the evolution of musicality—the capacity to process musical information—in relation to human vocal communication. We suggest that a complex social environment, with strong social bonds, promoted the appearance of musicality-related abilities. These social bonds were not limited to those between offspring and mothers or other carers, although these may have been especially influential in view of altriciality of human infants. The model can be further tested in other species by comparing levels of sociality and complexity of vocal communication. By integrating several theories, our model presents a radically different view of musicality, not limited to specifically musical scenarios, but one in which this capacity originally evolved to aid parent–infant communication and bonding, and even today plays a role not only in music but also in IDS, as well as in some adult-directed speech contexts. This article is part of the theme issue ‘Voice modulation: from origin and mechanism to social impact (Part II)’.
... We all share the amazing capacity to produce, perceive and enjoy (or dislike) music, probably soon after we are born [25][26][27], or perhaps even before (e.g. [28,29]), and music has a substantial capacity to affect our emotions [30,31]. ...
... He also highlights the example of reading and writing, both cultural inventions, which are each partially associated with functional specializations in specific brain regions (a product of neural plasticity) and in which disorders may sometimes be driven by genetic causes, at least for reading. However, musicality (unlike making fire, reading, writing, or even music) is not a behaviour per se but a capacity that seems not to be taught or learned and appears to be present from early infancy [71][72][73][74], or even before delivery [28,29]. Thus, the question of whether music is an adaptation could be a dead end (see [75]), but the origin of musicality is anything but. ...
Preprint
Full-text available
Studies show that specific vocal modulations, akin to those of infant-directed speech and perhaps music, play a role in communicating intentions and mental states during human social interaction. Based on this, we propose a model for the evolution of musicality –the capacity to process musical information– in relation to human vocal communication. We suggest that a complex social environment, with strong social bonds, promoted the appearance of musicality-related abilities. These social bonds were not limited to those between offspring and mothers or other carers, although these may have been especially influential in view of altriciality of human infants. The model can be further tested in other species by comparing levels of sociality and complexity of vocal communication. By integrating several theories, our model presents a radically different view of musicality, not limited to specifically musical scenarios, but one in which this capacity originally evolved to aid parent-infant communication and bonding, and even today plays a role, not only in music but also in infant-directed speech (IDS), as well as some adult-directed speech (ADS) contexts.
... In earlier work and, remarkably, even more recently, infant crying has been viewed as essentially stereotypic, similar to primate calls 69-71 . This view has since been refuted 35,44,45,49,[72][73][74][75][76][77][78] . The mitigated, melodic cries of human infants are, at some level, similar to simple musical melodies ("glissandi smoothly slurred or swept over a certain frequency interval" 17 ; p.643) and could provide raw material for prosodic constituents of later language 17,37,58 . ...
... Starting from a melody contour that is still rather simple, i.e. single-arc-like (see Fig. 1a,b), melody becomes more complex, i.e. multiple-arc-like, with increasing age (see Fig. 1c,d). By the second to third month of life, and depending on age-specific factors like individual fitness 79 , sex hormone concentration during mini-puberty 73,80,81 or the surrounding language 75,82,83 , melody structure in both vocalisation types becomes more complex, i.e. multiple-arc-like. ...
Article
Full-text available
In early infancy, melody provides the most salient prosodic element for language acquisition and there is ample evidence for infants’ precocious aptitudes for musical and speech melody perception. Yet, a lack of knowledge remains with respect to melody patterns of infants’ vocalisations. In a search for developmental regularities of cry and non-cry vocalisations and for building blocks of prosody (intonation) over the first 6 months of life, more than 67,500 melodies (fundamental frequency contours) of 277 healthy infants from monolingual German families were quantitatively analysed. Based on objective criteria, vocalisations with well-identifiable melodies were grouped into those exhibiting a simple (single-arc) or complex (multiple-arc) melody pattern. Longitudinal analyses using fractional polynomial multi-level mixed-effects logistic regression models were applied to these patterns. A significant age (but not sex) dependent developmental pattern towards more complexity was demonstrated in both vocalisation types over the observation period. The theoretical concept of melody development (MD-Model) contends that melody complexification is an important building block on the path towards language. Recognition of this developmental process will considerably improve not only our understanding of early preparatory processes for language acquisition, but most importantly also allow for the creation of clinically robust risk markers for developmental language disorders.
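The modelling idea in this abstract, the probability of a complex (multiple-arc) melody increasing with age, can be sketched in simplified form. The study used fractional polynomial multi-level mixed-effects logistic regression; the snippet below drops the random infant effects and the fractional polynomial terms and is only a rough, hypothetical stand-in, with all file and column names assumed.

```python
# Simplified illustration: probability of a complex (multiple-arc) melody as a
# function of age. Random infant effects and fractional polynomial terms of the
# published analysis are omitted; column names are assumptions.

import pandas as pd
import statsmodels.formula.api as smf

# expected columns: infant_id, age_weeks, is_complex (0/1), vocal_type ("cry"/"non-cry")
df = pd.read_csv("melody_patterns.csv")

model = smf.logit("is_complex ~ age_weeks + C(vocal_type)", data=df).fit()
print(model.summary())

# predicted probability of a complex melody at selected ages, for cries
pred = model.predict(pd.DataFrame({"age_weeks": [4, 12, 24], "vocal_type": ["cry"] * 3}))
print(pred.round(3))
```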
... This is the prerequisite for incorporating melodic elements of the mother tongue into the infant's own melodies (Dahlem, 2008; Prochnow, 2013; Prochnow, Erlandsson, Hesse & Wermke, 2017; Wermke et al., 2017; Wermke, Mende et al., 2002; Mampe, Friederici et al., 2009) ... (Harding, 1984a, 1984b). While the phonation mechanisms of young infants are already well functional, the supralaryngeal articulatory mechanisms are not yet well developed (Kent & Vorperian, 1995) ... feedback mechanisms (Titze, 2008). ...
... In (Dahlem, 2008; Prochnow, 2013; Prochnow et al., 2017) ... (2005; Lester, 1987; Michelsson, 1971; Michelsson & Sirvio, 1976; Vohr et al., 1989; Wasz-Höckert, Michelsson & Lind, 1985) ... (Bishop et al., 1995; Newbury, Bishop & Monaco, 2005; Nudel et al., 2020; Tallal, Ross & Curtiss, 1989) ...
Thesis
In the present study, healthy neonates with different hearing screening results were examined with respect to the properties of the fundamental frequency contour of their spontaneous cries. The aim was to determine whether the spontaneous vocalisations of healthy neonates who passed the newborn hearing screening (NHS) differ in their modelled fundamental frequency properties and melody length from those of neonates with an abnormal NHS result. Within the project, 82 healthy neonates (2nd-4th day of life) were recruited and, depending on the result of the routinely performed NHS, assigned to two groups: neonates with a normal NHS result (group NHS_TU) and neonates with an abnormal NHS result (group NHS_TA). A follow-up examination after 3 months verified whether the neonates with an abnormal NHS were in fact all normally hearing. A total of 2,330 spontaneous vocalisations were recorded and quantitatively analysed. Melody length, minimum, maximum and mean fundamental frequency, and fundamental frequency range were calculated for each vocalisation. For each neonate, an arithmetic mean of the analysed variables was formed and then compared between the two groups. The results of the present study show that healthy neonates with different hearing screening results do not differ significantly in their fundamental frequency properties. It was thus confirmed that healthy neonates with an abnormal NHS who are normally hearing at follow-up (false positives) exhibit the same vocal properties as neonates with a normal NHS. Overall, the present study was the first to objectively analyse properties of the fundamental frequency contour of spontaneous vocalisations of neonates with a normal NHS result compared with an abnormal NHS result and to provide corresponding reference values for healthy neonates. This establishes an important prerequisite for subsequent studies to investigate whether the melody contour could be a potential early indicator of a sensorineural hearing disorder in neonates.
... For a long time, infant crying has primarily been seen as a behavioral state in contrast to other states such as sleep, drowsiness and wakefulness, and one that points to a physiological state of distress [1][2][3][4]. This behavior, typical of the first months of life, can also be characterized as a social signal and a communicative behavior that marks the first steps toward language [5][6][7][8][9][10]. In the context of the precursor function to language, the melody structure (i.e., fundamental frequency contour) in particular appears to be of essential importance. ...
Article
Cry melody serves as a platform for the eventual development of expressive language. Complex melodic structures exist in the naturally occurring discomfort cries of healthy term infants as young as 2 months of age. To date, no study has analyzed the influence of distress on the complexity of cry melody. The purpose of this study was to determine whether the distress cries produced across a group of 11 healthy 2-month-old infants contained a degree of complexity comparable to that of spontaneous cries at that age. Results indicated a low occurrence of complex melody in distress cries at the age of 2 months compared to past reports for same-aged infants producing non-distress spontaneous cries. Cries judged to be reflective of low distress were generally found to have a more complex melody. Collectively, the present study supports the hypothesis that the complexity of cry melody is reduced in distress-elicited crying.
... This is consistent with the melodic contour of Mandarin (a tonal language) being more complex than that of German. Similarly, Prochnow and colleagues [54] analyzed the cries of Swedish newborns, and contrasted them with German newborns. Swedish is a pitch-accent language, and German is not. ...
Article
Full-text available
Whether human language evolved via a gestural or a vocal route remains an unresolved and contentious issue. Given the existence of two preconditions—a “language faculty” and the capacity for imitative learning both vocally and manually—there is no compelling evidence for gesture being inherently inferior to vocalization as a mode of linguistic expression; indeed, signed languages are capable of the same expressive range as spoken ones. Here, we revisit this conundrum, championing recent methodological advances in human neuroimaging (specifically, in utero functional magnetic resonance imaging) as a window into the role of the prenatal gestational period in language evolution, a critical, yet currently underexplored environment in which fetuses are exposed to, and become attuned to, spoken language. In this Unsolved Mystery, we outline how, compared to visual sensitivity, the ontogenically earlier development of auditory sensitivity, beginning in utero and persisting for several months post-partum, alongside the relative permeability of the uterine environment to sound, but not light, may constitute a small but significant contribution to the current dominance of spoken language.
... Intrauterine anticipatory perception is further demonstrated in studies where late-term foetuses discriminate their native language (mother tongue) from an unknown language [34]. More recent studies report that intonation (melodic) features of native-language prosody experienced in utero shape postpartum vocal learning; newborns exhibit the same maternal pitch-based elements in their own cry melodies [35,36]. In a similar study testing foetal awareness of abdominal touch, foetal movement responses were found to be specific to maternal touch, reducing when their mother touched the abdomen but not when another adult did so, suggesting foetal awareness of maternal-specific behaviour [37]. ...
Article
Full-text available
We review recent work that examines the genesis of a prereflective self-consciousness in utero in humans. We focus on observable behaviours that suggest a state of anticipatory perceptual awareness evident in the foetal period and the foetus' first expression of agency through self-generative engagement with it. This predictive, anticipatory awareness is first evident in the prospective sensorimotor organisation of bodily movements of the second-trimester foetus, revealing an early adaptive awareness and agency that establishes the foundation for additional forms of abstract, reflective, and conceptually backed conscious experience in adults. Advanced understanding of these early sensorimotor foundations of psychological development and health may afford a better understanding of adult human consciousness, the nature of its early ontogeny, and its particular expression mediated by the integrative nervous system.
... Intrauterine anticipatory perception is further demonstrated in studies where late-term foetuses discriminate their native language (mother tongue) from an unknown language [34]. More recent studies report that intonation (melodic) features of native-language prosody experienced in utero shape postpartum vocal learning; newborns exhibit the same maternal pitch-based elements in their own cry melodies [35,36]. In a similar study testing foetal awareness of abdominal touch, foetal movement responses were found to be specific to maternal touch, reducing when their mother touched the abdomen but not when another adult did so, suggesting foetal awareness of maternal-specific behaviour [37]. ...
Preprint
In this review, we look at recent work that examines the genesis of conscious awareness in utero in humans. We focus on observable behaviours that suggest a state of anticipatory perceptual awareness evident in the foetal period, and the foetus’ first expression of agency through self-generative engagement with it. This predictive, anticipatory awareness is first evident in the prospective sensorimotor organisation of bodily movements of the second-trimester foetus, revealing an early adaptive awareness and agency that establishes the foundation for additional forms of abstract, reflective, and conceptually backed conscious experience in adults. Improved understanding of these early sensorimotor foundations of psychological development and health affords an improved understanding of adult human consciousness, the nature of its early ontogeny, and its particular expression mediated by the integrative nervous system.
... They can distinguish their native language from other languages not belonging to the same rhythm-family (e.g., French and Japanese), but not between languages sharing similar stress properties (e.g., English and Dutch) (Nazzi et al., 1998). Newborn infants incorporate their native language's stress patterns in their cry vocalisations (Mampe et al., 2009;Prochnow et al., 2019). Similarly, German-speaking 6-month-old infants prefer listening to stress patterns found in German (trochaic vs. iambic) (Höhle et al., 2009). ...
Article
Full-text available
While many theoretical proposals about the relationship between language and music processing have been proposed over the past 40 years, recent empirical advances have shed new light on this relationship. Many features are shared between language and music, inspiring research in the fields of linguistic theory, systematic musicology, and cognitive (neuro‐)science. This research has led to many and diverse findings, making comparisons difficult. In the current review, we propose a framework within which to organise past research and conduct future research, suggesting that past research has assumed either domain‐specificity or domain‐generality for language and music. Domain‐specific approaches theoretically and experimentally describe aspects of language and music processing assuming that there is shared (structure‐building) processing. Domain‐general approaches theoretically and experimentally describe how mechanisms such as cognitive control, attention or neural entrainment can explain language and music processing. Here we propose that combining elements from domain‐specific and domain‐general approaches can be beneficial for advances in theoretical and experimental work, as well as for diagnoses and interventions for atypical populations. We provide examples of past research which has implicitly merged domain‐specific and domain‐general assumptions, and suggest new experimental designs that can result from such a combination aiming to further our understanding of the human brain.
... On the one hand, it has been reported that newborn infant cry acoustics differ for children exposed to French versus German, two languages with distinctly different stress patterns (Mampe et al., 2009). Follow-up studies comparing Swedish to German (Prochnow et al., 2017) and Mandarin to German have found similar results. There is some question, though, whether these results should be trusted, since the statistical analyses were at the cry-utterance level rather than at the child level, which could mean that the results are primarily driven by individual differences in children's cry acoustics rather than by language-related differences (Gustafson et al., 2017). ...
... 16,056). The "intensive training" of an already functioning auditory organ, which lasts for about 3 months in healthy foetuses, leaves traces in the brain that modulate not only auditory imprints but also the vocal production of the newborn [9][10][11]. ...
Article
Full-text available
Introduction: Vocants, as infants' first vocalic utterances, are produced laryngeally while the vocal tract is maintained in a neutral position. These "primitive" sounds have sometimes been described as largely innate and, therefore, as sounding alike in both healthy and hearing-impaired young infants. Objective: To compare melody features of vocants, recorded during face-to-face interaction, between infants (N=8) with profound congenital sensorineural hearing loss (HI group) and age-matched controls (N=18; CO group). The question was: does a lack of auditory feedback have a noticeable effect on melodic features of vocants? Methods: The cooing database totalled 6998 vocalizations (HI: N=2847; CO: N=4151), all of which had been recorded during the observation period of 60-181 days of age. Identification of the vocants (N=1148) was based on broadband spectrograms (KAY-CSL) and auditory impressions. Fundamental frequency (F0) analyses were performed (PRAAT) and the pattern of the F0 contour (melody) analysed using specific in-lab software (CDAP, pw-project). Generalized mixed linear models were used to perform group comparisons. Results: There was a clear predominance of a simple rising-falling pattern (single melody arcs) in vocants of both groups. Nonetheless, significantly more complex contours, particularly double-arc structures, were found in vocants of the CO group. Moreover, vocants of the HI group were shorter than those uttered by the CO group, while the mean F0 did not significantly differ. Conclusion: Vocants are characterized both by innate features, found in the HI and CO groups, and by features that additionally require a functioning auditory system. Even at an early pre-linguistic stage, somatosensory sensations cannot compensate for a lack of auditory feedback. Vocants might be relevant in the early diagnosis of hearing disorders and assessments of the effectiveness of, or adjustments required to, hearing aids.
... They mainly perceive the time-varying fundamental frequency (f0), i.e., the melody, which is the most salient acoustic cue for young infants [5,6]. The aptitude and inborn intention to implement perceptive experience in vocal production start shortly after birth, when neonatal crying is shaped by the speech melody of the surrounding language [7][8][9]. These imprints are traces of the early maturity of the auditory system. ...
Article
Full-text available
Introduction: The fundamental frequency contour (melody) of cry and non-cry utterances becomes more complex with age. However, there is a lack of longitudinal analyses of melody development during the first year of life. Objective: The aim of the study is to longitudinally analyse melody development in typical vocalisation types across the first 12 months of life. The aim was twofold: (1) to answer the question whether melody becomes more complex in all vocalisation types with age, and (2) to characterize complex patterns in more detail. Methods: Repeatedly recorded vocalisations (n=10,988) of 10 healthy infants (6 female) over their first year of life were analysed using frequency spectrograms and fundamental frequency (f0) analyses (PRAAT). Melody complexity analysis was performed using specific in-lab software (CDAP, pw-project) in a final subset of 9,237 utterances that contained noise-free, undisturbed contours. Generalized mixed linear models were used to analyse age and vocalisation-type effects on melody complexity. Results: The vocalisation repertoire showed a higher proportion of complex melodies from the second month onwards. The age effect was significant, but no difference was found in melody complexity between cry and non-cry vocalisations across the first six months. From months 7-12, there was a further significant increase in complex structures only in canonical babbling, not in marginal babbling. Melody segmentations by laryngeal constrictions prevailed among complex shapes. Conclusion: The study demonstrated the regularity of melody development in different vocalisation types throughout the first year of life. In terms of prosodic features of infant sounds, melody contour is of primary importance, and further studies are required that also include infants at risk for language development.
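As a purely illustrative companion to the PRAAT-based workflow mentioned here, the sketch below extracts an f0 contour with Praat via the parselmouth package and applies a crude peak-count rule to label a melody as simple or complex. The file name, pitch limits and the prominence threshold are assumptions; the cited studies used dedicated in-lab software (CDAP) rather than this code.

```python
# Illustrative f0 (melody) extraction and a crude simple/complex label.
# Not the authors' analysis pipeline; parameters are assumed values.

import numpy as np
import parselmouth
from scipy.signal import find_peaks

snd = parselmouth.Sound("infant_vocalisation.wav")
pitch = snd.to_pitch(time_step=0.01, pitch_floor=150.0, pitch_ceiling=800.0)
f0 = pitch.selected_array["frequency"]          # 0.0 marks unvoiced frames
voiced = f0[f0 > 0]

# crude arc count: prominent local maxima in the voiced contour (threshold assumed)
peaks, _ = find_peaks(voiced, prominence=30.0)
label = "complex (multiple-arc)" if len(peaks) >= 2 else "simple (single-arc)"
print(f"{voiced.size} voiced frames, melody judged {label}")
```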
... This cultural difference will affect all aspects of human life, including language. During the language learning process, humans are greatly influenced by the environment and their mother tongue (Putra, 2022; Prochnow et al., 2019; Mnasri & Habbash, 2021; Ganuza & Hedman, 2015), and that is what already happens with language learners in Indonesia (Saddhono & Rohmadi, 2014). ...
Article
Full-text available
This research discussed the Buginese and Javanese phonological interference and the factors that influenced that case from the students in senior high school. The method that was used in this research was qualitative research. The researchers collected the data with nine fricative consonants (f, v, θ, ð, s, z, ʃ, ʒ, h) by reading test, recording and interview, then analyzed with the theory by Weinreich (1979). The object was the Buginese and Javanese students of Senior High School 2 of East Luwu. The data showed that phonological interference produced by Buginese and Javanese are only two of three kinds of phonological interference by Crystal (2003). From Buginese students, the researchers only found one category of phonological interference which was sound replacements on consonant {f}, meanwhile on Javanese students found two categories of phonological interference, that were sound addition on sound {h} and sound replacement on sound {ʃ}. On the other hand, there are two factors that caused phonological interference of Buginese and Javanese students in this research, such as bilingual background and disloyalty to the target language. The factors that were found related to the factors mentioned by Weinreich (1979). Keywords: Phonological Interference, Buginese Phonological Interference, Javanese Phonological Interference
... Systematic differences in the acoustics of crying or the structure of "non-cry" background noise, related to children's age, household language, or socioeconomic circumstances, could all affect model performance. For example, neonates show native-language-specific differences in the structure of their cry melodies (Prochnow et al., 2019). As such, we note that our DL model was tested on audio collected from 1- to 7-month-old infants from mostly English-speaking homes, who were 52% non-Hispanic White (15% Hispanic White, 7% Hispanic, 7% Black, 15% Hispanic Black, 4% multiracial) and whose mothers had a relatively high educational attainment (33% graduate school, 33% college diploma, 22% some college, 11% high school diploma or less). ...
Article
Full-text available
Human infant crying evolved as a signal to elicit parental care and actively influences caregiving behaviors as well as infant–caregiver interactions. Automated cry detection algorithms have become more popular in recent decades, and while some models exist, they have not been evaluated thoroughly on daylong naturalistic audio recordings. Here, we validate a novel deep learning cry detection model by testing it in assessment scenarios important to developmental researchers. We also evaluate the deep learning model’s performance relative to LENA’s cry classifier, one of the most commonly used commercial software systems for quantifying child crying. Broadly, we found that both deep learning and LENA model outputs showed convergent validity with human annotations of infant crying. However, the deep learning model had substantially higher accuracy metrics (recall, F1, kappa) and stronger correlations with human annotations at all timescales tested (24 h, 1 h, and 5 min) relative to LENA. On average, LENA underestimated infant crying by 50 min every 24 h relative to human annotations and the deep learning model. Additionally, daily infant crying times detected by both automated models were lower than parent-report estimates in the literature. We provide recommendations and solutions for leveraging automated algorithms to detect infant crying in the home and make our training data and model code open source and publicly available.
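The agreement metrics reported above (recall, F1, Cohen's kappa) can be reproduced on frame-level annotations with standard tooling. The snippet below is a minimal, hypothetical sketch, not the authors' evaluation pipeline; the toy arrays and the 5-minute frame length are assumptions.

```python
# Toy example: scoring an automated cry detector against human annotations on
# fixed-length frames of a daylong recording. Placeholder data only.

import numpy as np
from sklearn.metrics import recall_score, f1_score, cohen_kappa_score

# 1 = frame annotated/predicted as crying, 0 = not crying (e.g. 5-min frames)
human = np.array([0, 0, 1, 1, 1, 0, 0, 1, 0, 0])
model = np.array([0, 0, 1, 1, 0, 0, 0, 1, 1, 0])

print("recall:", recall_score(human, model))
print("F1:    ", f1_score(human, model))
print("kappa: ", cohen_kappa_score(human, model))

# crude estimate of daily over/under-detection, assuming 5-min frames
print("difference:", (model.sum() - human.sum()) * 5, "min/day")
```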
... And it is these characteristics of intonation and the rhythm behind each word that enable the production of meaning when communicating. Thus, when analyzing and quantitatively comparing the melodic structure of the crying of 52 Swedish newborns and 79 German newborns, researchers revealed that the prosody of the mother's language shapes the melody of the babies' crying (Prochnow et al., 2019). ...
... Arguably, intonation production starts from birth with crying (Mampe et al., 2009) and vocalisation shortly after birth (Kent and Murray, 1982). Newborn infants' crying patterns already reflect the intonation patterns of their native language (Mampe et al., 2009; Wermke et al., 2016, 2017; Manfredi et al., 2019; Prochnow et al., 2019). Infants begin with a predominant falling pitch contour then progress to other f0 patterns, with accent range increasing with age (Snow, 2001). ...
Article
Full-text available
Fundamental frequency (ƒ0), perceived as pitch, is the first and arguably most salient auditory component humans are exposed to since the beginning of life. It carries multiple linguistic (e.g., word meaning) and paralinguistic (e.g., speakers’ emotion) functions in speech and communication. The mappings between these functions and ƒ0 features vary within a language and differ cross-linguistically. For instance, a rising pitch can be perceived as a question in English but a lexical tone in Mandarin. Such variations mean that infants must learn the specific mappings based on their respective linguistic and social environments. To date, canonical theoretical frameworks and most empirical studies do not view or consider the multi-functionality of ƒ0, but typically focus on individual functions. More importantly, despite the eventual mastery of ƒ0 in communication, it is unclear how infants learn to decompose and recognize these overlapping functions carried by ƒ0. In this paper, we review the symbioses and synergies of the lexical, intonational, and emotional functions that can be carried by ƒ0 and are being acquired throughout infancy. On the basis of our review, we put forward the Learnability Hypothesis that infants decompose and acquire multiple ƒ0 functions through native/environmental experiences. Under this hypothesis, we propose representative cases such as the synergy scenario, where infants use visual cues to disambiguate and decompose the different ƒ0 functions. Further, viable ways to test the scenarios derived from this hypothesis are suggested across auditory and visual modalities. Discovering how infants learn to master the diverse functions carried by ƒ0 can increase our understanding of linguistic systems, auditory processing and communication functions.
... Within the first few days of their lives, newborns appear to be able to discriminate affective prosody, as well as the characteristic prosody of their native language (Cheng et al., 2012; Friederici, 2006). The prosody of their native language even influences the "melody" of their crying (Mampe et al., 2009; Prochnow et al., 2019), the melodic complexity of which increases over the first few months of their life. Over the first four months, both the melodic contour and the interval content increase in complexity, and infants who do not show this developmental trajectory in their crying exhibit poorer language performance two years later (Wermke & Mende, 2009; Wermke et al., 2007; Armbrüster et al., 2020). ...
Preprint
Full-text available
The human brain continues to mature throughout childhood, making our species particularly susceptible to experience. Given the diversity of music and language around the globe, how these are acquired during childhood is revealing about the feedback loop between our biological predispositions and exposure. Evidence suggests that children begin as generalists and become specialists, with music and language deeply entangled in infancy and modularity emerging over time. In addition, development proceeds along parallel tracks, with comparable cognitive milestones. Although there is a tendency to celebrate our precociousness, it may be that the opposite is true: our unfledged entry into the world affords us the extended time necessary to internalize these products of culture. The present chapter begins by exploring the variety of music and languages around the world. It then tracks developmental milestones from birth throughout childhood, examines linked developmental disorders, and closes with a discussion of open questions and future directions.
... https://orcid.org/0000-0002-3134-950X Notes: 1. Prior research has proven that there are differences in the cry patterns of children with and without asphyxia (Meghashree and Nataraja, 2019), as well as comparable differences in parental responses to infants' cries (Bornstein et al., 2017). However, research also suggests potential variations in cry melodies (Prochnow et al., 2019). 2. The database contains cry pattern samples from newborns to 6-month-old babies born in Mexico and consists of normal (i.e. ...
Article
Full-text available
This study explores relocated algorithmic governance through a qualitative study of the Ubenwa health app. The Ubenwa app, which was developed in Canada based on a dataset of babies from Mexico, is currently being implemented in Nigeria to detect birth asphyxia. The app serves as an ideal case for examining the socio-cultural negotiations involved in re-contextualising algorithmic technology. We conducted in-depth interviews with parents, medical practitioners and data experts in Nigeria; the interviews reveal individuals’ perceptions about algorithmic governance and self-determination. In particular, our study presents people’s insights about (1) relocated algorithms as socially dynamic ‘contextual settings’, (2) the (non)negotiable spaces that these algorithmic solutions potentially create and (3) the general implications of re-contextualising algorithmic governance. This article illustrates that relocated algorithmic solutions are perceived as ‘cosmopolitan data localisms’ that extend the spatial scales and multiply localities rather than as ‘data glocalisation’ or the indigenisation of globally distributed technology.
... The classification of the cry units (CUs) is indeed a quite hard task, both numerically and perceptually, and there are few clinically assessed relationships between melodic shapes and neurological conditions in the newborn [23,27,34,35]. Recently, the study of melodic shapes in the neonatal cry has proven to be an effective tool in characterizing linguistic differences between infants whose mothers have different mother tongues [25,26,39,40]. This aspect is relevant as it allows verifying newborns' ability to listen to external sounds in the last weeks of gestation, and their learning ability before birth. ...
Article
Objective This paper introduces BioVoice, a user-friendly software tool for the acoustical analysis of the human voice. It estimates more than 20 acoustical parameters with advanced and robust analysis techniques specifically developed for different vocal emissions, from the newborn to the adult and the singer. Methods BioVoice performs both time and frequency analyses, detecting the number, length, and percentage of voiced and unvoiced segments and computing fundamental frequency (F0), formant frequencies (F1-F3), noise level, and jitter. The software tool classifies the melodic shape of F0 into one of 12 basic shapes and allows performing perceptual analysis of newborn and child voice. In the singing-voice case, formants up to F5 are computed as well as the quality ratio and parameters concerning vibrato and its regularity. Colour figures and Excel tables show F0, the spectrogram with formants, voiced segments, and the quality ratio. Results Examples of voice analysis in adults, children, newborns, and singers are presented. They show the specific capabilities and the high performance of BioVoice, also in comparison with another existing software tool. Significance BioVoice is a free, user-friendly software tool for voice analysis that implements new estimation techniques. Basic parameters are computed as well as new ones specifically developed for newborn cry and singing-voice analysis, not available in current software tools. Conclusions BioVoice is capable of dealing with low- to high-pitched voices by implementing dedicated tools. Thanks to its simple and intuitive interface, colour figures and Excel tables, it is a valuable tool suitable also for the inexperienced user.
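BioVoice itself is a dedicated stand-alone tool. As a rough, hypothetical analogue of a few of the parameters it reports (mean F0, a formant value, local jitter), the sketch below uses Praat through the parselmouth package; the input file and parameter values are assumptions, and the code is unrelated to BioVoice's own algorithms.

```python
# Illustrative extraction of a few basic voice parameters with Praat/parselmouth.
# Not BioVoice; thresholds and the file name are placeholder assumptions.

import parselmouth
from parselmouth.praat import call

snd = parselmouth.Sound("voice_sample.wav")
mid = snd.get_total_duration() / 2

pitch = snd.to_pitch()
f0_mean = call(pitch, "Get mean", 0, 0, "Hertz")            # mean F0 over the file

formants = snd.to_formant_burg(maximum_formant=5500.0)
f1 = call(formants, "Get value at time", 1, mid, "Hertz", "Linear")  # F1 at midpoint

point_process = call(snd, "To PointProcess (periodic, cc)", 75, 600)
jitter_local = call(point_process, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)

print(f"mean F0 {f0_mean:.1f} Hz, F1 {f1:.0f} Hz, local jitter {jitter_local:.2%}")
```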
... There is now compelling evidence that a comprehensive understanding of early prespeech development requires the inclusion of typical mitigated, melodic crying (Borysiak et al., 2016; Fuamenya, Robb, & Wermke, 2015; Prochnow, Erlandsson, Hesse, & Wermke, 2017; Quast et al., 2016; Wermke et al., 1999, 2005, 2007, 2014; Wermke, Quast, & Hesse, 2018). The present findings support this assumption by documenting that laryngeal constriction phenomena are apparent in the cry and noncry vocalizations of typically developing infants. ...
Article
Full-text available
Purpose Instances of laryngeal constriction have been noted as a feature of infant vocal development. The purpose of this study was to directly evaluate the developmental occurrence of laryngeal constriction phenomena in infant crying, cooing, and babbling vocalizations. Method The cry and noncry vocalizations of 20 healthy term-born infants between the ages of 1 and 7 months were examined for instances of laryngeal constriction. Approximately 20,000 vocalization samples were acoustically evaluated, applying a combined visual (frequency spectra and melody curves) and auditory analysis; the occurrence of instances of different constriction phenomena was analyzed. Results Laryngeal constrictions were found during the production of cry and noncry vocalizations. The developmental pattern of constrictions for both vocalizations was characterized by an increase in constrictions followed by a decrease. During the age period of 3-5 months, when cry and noncry vocalizations were co-occurring, laryngeal constrictions were observed in 14%-22% of both types of vocalizations. An equal percentage of constrictions was found for both vocalizations at 5 months of age. Conclusions The findings confirm that the production of laryngeal constriction is a regularly occurring phenomenon in healthy, normally developing infants' spontaneous crying, cooing, and marginal babbling. The occurrence of constriction in both cry and noncry vocalizations suggests that an infant is exploiting physiological constraints of the sound-generating system for articulatory development during vocal exploration. These results lend support to the notion that the laryngeal articulator is the principal articulator that infants first start to control as they test and practice their phonetic production skills from birth through the first several months of life.
... The ambient language to which fetuses are exposed in the womb starts to affect their perception of their native language, particularly at a prosodic (melodic) level (Abboub et al., 2016; Moon et al., 1993). A newborn's cry melody is shaped by the maternal speech melody experienced prenatally (Mampe et al., 2009; Prochnow et al., 2017; Wermke et al., 2016a, 2016b). Melody development follows an inborn program for generating an increasing within-sound complexity and repertoire of complex patterns over the first few months. ...
Article
Contribution to Special Issue on Fast effects of steroids. Human infants are the most proficient of the few vocal learner species. Sharing similar principles in terms of the generation and modification of complex sounds, cross-vocal learner comparisons are a suitable strategy when it comes to better understanding the evolution and mechanisms of auditory-vocal learning in human infants. This approach will also help us to understand sex differences in relation to vocal development towards language, the underlying brain mechanisms thereof and sex-specific hormonal effects. Although we are still far from being capable of discovering the “fast effects of steroids” in human infants, we have identified that peripheral hormones (blood serum) are important regulators of vocal behaviour towards language during a transitory hormone surge (“mini-puberty”) that is comparable in its extent to puberty. This new area of research in human infants provides a promising opportunity to not only better understand early language acquisition from an ontogenetic and phylogenetic perspective, but to also identify reliable clinical risk-markers in infants for the development of later language disorders.
Chapter
In the early decades of the twentieth century, Lev Semenovich Vygotsky, Alexis Nikolaevich Leontiev, and Alexander Romanovich Luria proposed the so-called Cultural-Historical Psychology. The aforementioned authors took on the challenge of building a psychological theory that would overcome naturalizing, a-historical, and idealistic approaches to human development, in order to apprehend it, considering that human minds shape and are shaped by social, cultural, and historical conditions. Led by Vygotsky, the cultural-historical theory, centered on the processes of thought, language, behavior, and learning, brought great advances to psychology, and since then, this theoretical corpus has been expanded by several researchers, mainly in the area of Education. Simultaneously, it was possible to follow the development of cognitive psychology and neuroscience and their advances in the same areas of Vygotsky's interest. However, there is practically no academic work that connects Vygotsky's theoretical propositions on human development and contemporary evidence in these research fields. Thus, without intending to make a historical rescue, this text will present several solid connections between cultural-historical theory and recent findings in cognitive psychology and neuroscience. More than that, we will show that much of what was proposed by Vygotsky in the 1920s and 1930s is now part of the theoretical framework of several theories on the functioning of the mind and human development. The main aim of this text is to sensitize researchers in the areas of cognitive psychology and neuroscience to study Vygotsky, because his theory can contribute to current research. Keywords: Neuroscience, Cultural-Historical Psychology, Lev Vygotsky, Psychology, Human development
Article
Objectives Temporal and fundamental frequency (fo) variations in infant cries provide critical insights into the maturity of vocal control and hearing performance. Earlier research has examined the use of vocalisation properties (in addition to hearing tests) to identify infants at risk of hearing impairment. The aim of this study was to determine whether such an approach could be suitable for neonates. Methods To investigate this, we recruited 74 healthy neonates within their first week of life as our participants, assigning them to either a group that passed the ABR-based NHS (PG, N=36) or a group that did not but was diagnosed as normally hearing in a follow-up check at 3 months of life, a so-called false-positive group (NPG, N=36). Spontaneously uttered cries (N = 2,330) were recorded and analysed quantitatively. The duration, minimum, maximum and mean fo, as well as two variability measures (fo range, fo sigma), were calculated for each cry utterance, averaged for individual neonates, and compared between the groups. Results A multiple analysis of variance (MANOVA) revealed no significant effects. This confirms that cry features reflecting vocal control do not differ between healthy neonates with normal hearing, irrespective of the outcome of their initial NHS. Conclusions Healthy neonates who do not pass the NHS but are normal hearing in the follow-up (false-positive cases) have the same cry properties as those with normal hearing who do. This is an essential prerequisite to justify the research strategy of incorporating vocal analysis into NHS to complement ABR measures in identifying hearing-impaired newborns.
Chapter
This chapter focuses on the influence of sex hormones on early language processing and production in humans. It summarizes arguments and findings that demonstrate the importance of considering the involvement of hormone-mediated sex differences in any analysis of early vocal and language development from a neurophysiological perspective. Insights into the sex dimorphism of language lateralization, which is already evident in infants, support this approach. The concept of reviewing potential effects of the sex hormones estradiol and testosterone on early vocal and language development is based on recent findings of prepubertal hormone surges during infancy, which are comparable with pubertal effects in terms of their impact on brain development. A comprehensive survey of the research literature related to this newly developing clinical field is accompanied by a focus on existing assumptions on sex hormone-mediated effects on language-relevant neural circuits.
Article
Objective To evaluate the flexibility of respiratory behavior during spontaneous crying using an objective analysis of temporal measures in healthy neonates. Participants A total of 1,375 time intervals, comprising breath cycles related to the spontaneous crying of 72 healthy, full-term neonates (35 females) aged between two and four days, were analyzed quantitatively. Methods Digital recordings (44 kHz, 16 bit) of cries emitted in a spontaneous, pain-free context were obtained at the University Children's Hospital Würzburg. The amplitude-by-time representation of PRAAT: doing phonetics by computer (38) was used for the manual segmentation of single breath cycles involving phonation. Cursors were set in these time intervals to mark the duration of inspiratory (IPh) and expiratory phases (EPh), and double-checks were carried out using auditory analyses. A PRAAT script was used to extract temporal features automatically. The only intervals analyzed were those that contained an expiratory cry utterance embedded within preceding and subsequent inspiratory phonation (IP). Beyond the reliable identification of IPh and EPh, this approach also guaranteed inter-individual and inter-utterance homogenization with respect to inspiratory strength and an unconstricted vocal tract. Results Despite the physiological constraints of the neonatal respiratory system, a high degree of flexibility in the ratio of IPh/EPh was observed. This ratio changed hyperbolically (r = 0.71) with breath-cycle duration. Descriptive statistics for all the temporal measures are reported as reference values for future studies. Conclusion The existence of respiratory exploration during the spontaneous crying of healthy neonates is supported by quantitative data. From a clinical perspective, the data demonstrate the presence of a high degree of flexibility in the respiratory behavior, particularly neonates' control capability with respect to variable cry durations. These data are discussed in relation to future clinical applications.
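The hyperbolic relationship between the IPh/EPh ratio and breath-cycle duration reported here can be illustrated with a simple curve fit. The snippet below assumes a reciprocal model, ratio = a/duration + b, and placeholder file and column names; it is a sketch of the kind of analysis, not the study's actual procedure.

```python
# Illustrative fit of an assumed reciprocal model for the IPh/EPh ratio versus
# breath-cycle duration. Data and column names are placeholders.

import pandas as pd
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

df = pd.read_csv("breath_cycles.csv")        # assumed columns: iph_s, eph_s
df["cycle_s"] = df["iph_s"] + df["eph_s"]
df["ratio"] = df["iph_s"] / df["eph_s"]

def hyperbola(x, a, b):
    return a / x + b

params, _ = curve_fit(hyperbola, df["cycle_s"], df["ratio"])
r, _ = pearsonr(hyperbola(df["cycle_s"], *params), df["ratio"])
print(f"fitted a={params[0]:.3f}, b={params[1]:.3f}, r={r:.2f}")
```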
Article
Recent research studies have shown that since the last trimester of pregnancy human fetuses are able to listen to and possibly memorize auditory stimuli from the external world, as far as both music and language are concerned. In particular, they exhibit a specific sensitivity to prosodic features such as melody, intensity, and rhythm that are essential for an infant to learn and develop the native language. This paper presents first results concerning the automated mother-language identification of a set of about 7500 cry units coming from French, Arabic and Italian mother-tongue healthy full-term newborns. Acoustical parameters and 12 different melodic shapes are computed with the BioVoice software tool and their classification is performed with Random Forest and 4 neuro-fuzzy classifiers. Results show up to 95% differences among the three languages.
Article
Full-text available
The paper presents evidence that the intrauterine auditory environment plays a key role in shaping later auditory development. The acoustic environment in utero begins to shape the auditory system much earlier than sensory systems that are not exposed to input until after birth. The effects of prenatal auditory experience can be observed both in foetuses, through different paradigms, and in newborns within a few hours or days after birth. This manuscript provides a comprehensive snapshot of the work in this research area, drawing on the substantial number of papers published on this topic. Furthermore, the potential function of prenatal learning is explored in terms of its relevance for perinatal development. We describe growing evidence that externally generated sounds and music influence the developing foetus, and argue that such prenatal auditory experience may also set the trajectory for later development.
Article
Full-text available
Objectives: This study examined whether prenatal exposure to either a tonal or a nontonal maternal language affects fundamental frequency (fo) properties in neonatal crying. Study design: This is a prospective population study. Participants: A total of 102 neonates within the first week of life served as the participants. Methods: Spontaneously uttered cries (N = 6480) by Chinese (tonal language group) and German neonates (nontonal group) were quantitatively analyzed. For each cry utterance, mean fo and four characteristic variation measures (fo range, fo fluctuation, pitch sigma, and pitch sigma fluctuation) were calculated, averaged for individual neonates, and compared between groups. Results: A multiple analysis of variance highlighted a significant multivariate effect for language group: Wilks' λ = .76, F(6, 95) = 4.96, P < .0001, ηp² = .24. Subsequent univariate analyses revealed significant group differences for fo variation measures, with values higher in the tonal language group. The mean fo did not differ between groups. Conclusions: Data regarding fo variation in infant cries have been suggested as providing critical insight into the maturity of neurophysiological vocal control. Our findings, alongside auditory perception studies, further underscore the assumption of an early shaping effect of maternal speech, particularly of fo-based features, on cry features of newborns. Further studies are needed to reexamine this observation and to assess its potential diagnostic relevance.
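A MANOVA of the kind described in this abstract, comparing per-neonate fo measures between two language groups, can be set up as follows. The file and column names are assumptions and the snippet is only a generic sketch, not the authors' analysis code.

```python
# Generic MANOVA sketch: f0-based cry measures compared between two language
# groups. Column names and the input file are assumed for illustration.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# per-neonate averages; assumed columns:
# group ("tonal"/"nontonal"), f0_mean, f0_range, f0_fluct, pitch_sigma, sigma_fluct
df = pd.read_csv("neonate_cry_features.csv")

mv = MANOVA.from_formula(
    "f0_mean + f0_range + f0_fluct + pitch_sigma + sigma_fluct ~ group", data=df
)
print(mv.mv_test())       # reports Wilks' lambda, Pillai's trace, etc.
```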
Article
Full-text available
Objective: To evaluate the fundamental frequency (fo) variability of spontaneous cries produced by neonates with a tonal (Lamnso) or non-tonal (German) ambient language. Study Design: Prospective population study. Participants: A total of 21 German infants (10 male) and 21 Cameroonian (Nso) infants (10 male) within the first week of life served as participants. Methods: Spontaneously uttered cries by each infant were audio recorded. The cries were acoustically analyzed and measures of fo variability (pitch sigma, fo fluctuation, fo range) were calculated. Cry duration and anthropometric measures were calculated as co-factors. Results: Significant group differences were found for all fo variability measures, whereas somatic measures did not differ. Cry duration also differed significantly between groups. The results were indicative of Cameroonian (Nso) infants producing cries with more fo variability compared to German infants. Conclusion: Although further studies with a larger sample size are warranted, the data foster previous findings of an early imprinting effect of the ambient maternal language on cry fo.
Article
Full-text available
This study investigated the development of subcortical pitch processing, as reflected by the scalp-recorded frequency-following response, during early infancy. Thirteen Chinese infants who were born and raised in Mandarin-speaking households were recruited to partake in this study. Through a prospective-longitudinal study design, infants were tested twice: at 1–3 days after birth and at three months of age. A set of four contrastive Mandarin pitch contours were used to elicit frequency-following responses. Frequency Error and Pitch Strength were derived to represent the accuracy and magnitude of the elicited responses. Paired-samples t tests were conducted and demonstrated a significant decrease in Frequency Error and a significant increase in Pitch Strength at three months of age compared to 1–3 days after birth. Results indicated the developmental trajectory of subcortical pitch processing during the first three months of life.
Article
Full-text available
The study of infant music perception began in the 1970s, a time when young infants were considered incapable of holistic processing of auditory sequences. These limitations were reconsidered with the demonstration of infants' configural processing of pitch and timing patterns, which presaged the vibrant field of study that unfolded over subsequent decades. The 1980s revealed the salience of melodic contour for infants as well as adult-like processing of pitch and timing patterns. The 1990s shed new light on intervals and scales, uncovering situations in which infant listeners outperformed their adult counterparts. Scholars in the new millennium have documented a number of factors that influence rhythm perception in infancy, including incidental exposure to music and the experience of movement during music listening. In addition, brain-based measures are shedding light on the musical sensitivities of newborn infants. In sum, the conception of infants vis-à-vis music has changed substantially over the past four decades. Moreover, research in this realm is influencing ongoing debate about the nature and origins of music.
Article
Full-text available
Significance Newborns can hear their mother’s voice and heartbeat sounds before birth. However, it is unknown whether, how early, and to what extent the newborn's brain is shaped by exposure to such maternal sounds. This study provides evidence for experience-dependent plasticity in the auditory cortex in preterm newborns exposed to authentic recordings of maternal sounds before full-term brain maturation. We demonstrate that the auditory cortex is more adaptive to womb-like maternal sounds than to environmental noise. Results are supported by the biological fact that maternal sounds would otherwise be present in utero had the baby not been born prematurely. We theorize that exposure to maternal sounds may provide newborns with the auditory fitness necessary to shape the brain for hearing and language development.
Article
Full-text available
Musicality can be defined as a natural, spontaneously developing trait based on and constrained by biology and cognition. Music, by contrast, can be defined as a social and cultural construct based on that very musicality. One critical challenge is to delineate the constituent elements of musicality. What biological and cognitive mechanisms are essential for perceiving, appreciating and making music? Progress in understanding the evolution of music cognition depends upon adequate characterization of the constituent mechanisms of musicality and the extent to which they are present in non-human species. We argue for the importance of identifying these mechanisms and delineating their functions and developmental course, as well as suggesting effective means of studying them in human and non-human animals. It is virtually impossible to underpin the evolutionary role of musicality as a whole, but a multicomponent perspective on musicality that emphasizes its constituent capacities, development and neural cognitive specificity is an excellent starting point for a research programme aimed at illuminating the origins and evolution of musical behaviour as an autonomous trait.
Article
Full-text available
Crying is the earliest sound production of human infants on their long way toward language. Here, we argue that infants' early crying contains melodic constituents of both musical and prosodic structures. This view is based on our findings that cry melodies become increasingly complex during the first months of life and that complex cry melodies are composed of shape-specific melody arcs. We found that cry melodies contain frequency ratios that show a certain preference for musical intervals. We also observed that young infants are capable of uttering shape-similar melody arcs at different frequency levels, which means they have an aptitude for frequency transposition from birth. Moreover, we observed that the production of phonatory breaks within single expiratory sounds generates rhythmical elements and points to a flexible time organization. Our data support the view that elementary constituents of both musicality and the language faculty unfold in crying. The results may elucidate the relation between emotionally charged sounds and music or language, respectively, and suggest directions for further research. © 2009 by ESCOM European Society for the Cognitive Sciences of Music.
Article
Full-text available
To evaluate the developmental occurrence of subharmonic (SH) and noise (N) phenomena and to quantify their extent in the spontaneous cries of healthy infants across the first 3 months. Population-based prospective study. Spontaneous cries from 20 infants (10 male) were repeatedly recorded across the first 3 months of life. Frequency spectra and waveforms were used to identify the occurrence of SH and N and to measure the percentage of their combined occurrence in overall monthly crying behavior (expressed as a quantitative noise index [NI]). SH and N episodes were prevalent in the cries of young infants during the first 2 months, being present in more than 50% of the recorded cries. A developmental trend was evident in NI, with a significant decrease across the 3-month period. A corresponding significant increase in the mean duration of single cries was observed during the same period. SH and phonatory noise are regularly occurring phenomena in healthy infant crying because of the characteristics of pediatric larynx anatomy and the neurophysiological control mechanisms underlying cry production. The reduction in NI appears to correspond with the development of an infant's crying complexity. The utility of NI as a metric of cry phonatory behavior should next be validated on infant groups with known or suspected health problems. Copyright © 2014 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
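The abstract does not give the exact formula for the noise index. One plausible reading is the share of total crying time occupied by SH/N episodes, pooled per month; the sketch below follows that reading, so the Cry fields and the time-share formula are assumptions rather than the authors' definition.

from dataclasses import dataclass

@dataclass
class Cry:
    duration_s: float        # total duration of one expiratory cry
    sh_noise_s: float        # summed duration of SH/N episodes within it

def noise_index(cries):
    # NI, read here as the percentage of total crying time in a month that
    # is occupied by subharmonic or noise episodes.
    total = sum(c.duration_s for c in cries)
    noisy = sum(c.sh_noise_s for c in cries)
    return 100.0 * noisy / total if total else 0.0

month_1 = [Cry(1.2, 0.8), Cry(0.9, 0.5), Cry(1.5, 0.6)]   # toy values
month_3 = [Cry(1.8, 0.2), Cry(2.1, 0.3), Cry(1.6, 0.1)]   # toy values
print(f"NI, month 1: {noise_index(month_1):.1f}%")         # high early on
print(f"NI, month 3: {noise_index(month_3):.1f}%")         # expected to drop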
Article
Full-text available
The specific impact of sex hormones on brain development and acoustic communication is known from animal models. Sex steroid hormones secreted during early development play an essential role in hemispheric organization and the functional lateralization of the brain, e.g. for language. In animals, these hormones are well-known regulators of vocal motor behaviour. Here, the association between melody properties of infants' sounds and serum concentrations of sex steroids was investigated. Spontaneous crying was sampled in 18 healthy infants, with two samples per infant taken at four and eight weeks of age, respectively. Blood samples were taken within a day of the crying samples. The fundamental frequency contour (melody) was analysed quantitatively, and the infants' frequency modulation skills were expressed by a melody complexity index (MCI). These skills provide prosodic primitives for later language. A hierarchical, multiple regression approach revealed a significant, robust relationship between the individual MCIs and the unbound, bioactive fraction of oestradiol at four weeks as well as with the four-to-eight-week difference in androstenedione. No robust relationship was found between the MCI and testosterone. Our findings suggest that oestradiol may have effects on the development and function of the auditory-vocal system in human infants that are as powerful as those in vocal-learning animals.
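Given how the MCI is defined here and in the main study (the share of cries whose melody contains two or more well-defined arcs), a minimal sketch of how it could be computed from frame-wise f0 contours follows. The prominence threshold and the use of scipy's peak finder are illustrative assumptions, not the authors' criteria for a "well-defined" arc.

import numpy as np
from scipy.signal import find_peaks

def count_melody_arcs(f0, min_prominence_hz=20.0):
    # Count arc-like (rise-fall) substructures in one cry's f0 contour:
    # local maxima whose prominence exceeds the given threshold.
    f0 = np.asarray(f0, dtype=float)
    voiced = f0[f0 > 0]                       # keep voiced frames only
    if voiced.size < 3:
        return 0
    peaks, _ = find_peaks(voiced, prominence=min_prominence_hz)
    return len(peaks)

def melody_complexity_index(f0_contours):
    # MCI: share of cries whose melody shows two or more well-defined arcs.
    n_complex = sum(count_melody_arcs(c) >= 2 for c in f0_contours)
    return n_complex / len(f0_contours)

# Toy example: one single-arc and one double-arc contour -> MCI = 0.5
t = np.linspace(0, 1, 200)
single_arc = 400 + 80 * np.sin(np.pi * t)              # one rise-fall
double_arc = 400 + 80 * np.abs(np.sin(2 * np.pi * t))  # two rise-falls
print(melody_complexity_index([single_arc, double_arc]))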
Article
Full-text available
We investigated the neural correlates induced by prenatal exposure to melodies using the brain's event-related potentials (ERPs). During the last trimester of pregnancy, the mothers in the learning group played the 'Twinkle, Twinkle, Little Star' melody five times per week. After birth, and again at the age of 4 months, we played the infants a modified melody in which some of the notes were changed, while ERPs to the unchanged and changed notes were recorded. ERPs were also recorded from a control group, who received no prenatal stimulation. Both at birth and at the age of 4 months, infants in the learning group had stronger ERPs to the unchanged notes than the control group. Furthermore, the ERP amplitudes to the changed and unchanged notes at birth were correlated with the amount of prenatal exposure. Our results show that extensive prenatal exposure to a melody induces neural representations that last for several months.
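As an illustration of the two analysis steps described above (condition-wise ERP averaging and correlating response amplitude with the amount of exposure), here is a minimal sketch on simulated data; the sampling rate, analysis window and all numeric values are toy assumptions, not the study's parameters.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
fs = 250                                       # toy sampling rate (Hz)
n_trials, n_samples = 100, int(0.6 * fs)       # 600 ms stimulus-locked epochs

def erp_window_mean(epochs, fs, t0=0.20, t1=0.35):
    # Average the epochs (trials x samples) and return the mean amplitude in
    # a post-stimulus window, a common single-number ERP summary.
    erp = epochs.mean(axis=0)
    return erp[int(t0 * fs):int(t1 * fs)].mean()

# Toy single-infant data: a small evoked deflection buried in noise for the
# unchanged notes, noise only for the changed notes.
unchanged = rng.normal(0, 5, (n_trials, n_samples))
unchanged[:, int(0.25 * fs):int(0.30 * fs)] += 3.0
changed = rng.normal(0, 5, (n_trials, n_samples))
print("unchanged notes:", round(erp_window_mean(unchanged, fs), 2))
print("changed notes:  ", round(erp_window_mean(changed, fs), 2))

# Across infants: correlate response amplitude with logged prenatal exposure.
amps = [2.1, 3.4, 1.8, 4.0, 2.9, 3.6]          # toy window means (microvolts)
sessions = [20, 55, 15, 70, 40, 60]            # toy numbers of listening sessions
r, p = pearsonr(amps, sessions)
print(f"exposure-amplitude correlation: r = {r:.2f}, p = {p:.3f}")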
Article
Full-text available
The paper will draw on ethnomusicological, cognitive and neuroscientific evidence in suggesting that music and language constitute complementary components of the human communicative toolkit. It will start by outlining an operational definition of music as a mode of social interaction in terms of its generic, cross-cultural properties that facilitates comparison with language as a universal human faculty. It will argue that, despite the fact that music appears much more heterogeneous and differentiated in function from culture to culture than does language, music possesses common attributes across cultures: it exploits the human capacity to entrain to external (particularly social) stimuli, and presents a rich set of semantic fields while under-determining meaning. While language is held to possess both combinatoriality and semanticity, music is often claimed to be combinatorial but to lack semanticity. This paper will argue that music has semanticity, but that this semanticity is adapted for a different function from that of language. Music exploits the human capacity for entrainment, increasing the likelihood that participants will experience a sense of ‘shared intentionality’. It presents the characteristics of an ‘honest signal’ while under-specifying goals in ways that permit individuals to interact even while holding to personal interpretations of goals and meanings that may actually be in conflict. Music allows participants to explore the prospective consequences of their actions and attitudes towards others within a temporal framework that promotes the alignment of participants’ sense of goals. As a generic human faculty music thus provides a medium that is adapted to situations of social uncertainty, a medium by means of which a capacity for flexible social interaction can be explored and reinforced. It will be argued that a faculty for music is likely to have been exaptive in the evolution of the human capacity for complex social interaction.
Article
Full-text available
From infancy onward, we humans have a keen perceptual sensitivity to the melodic, rhythmic and dynamic aspects of speech and music. As far as we currently know, this is a uniquely human predisposition for perceiving, interpreting and appreciating music, present before a single word has been spoken or even conceived. It is this preverbal, preliterate stage that musical listening is steeped in. Music plays in intriguing ways with our hearing, our memory, our emotions and our expectations. As listeners we are often unaware of it, but we play an active role in what makes music exciting, comforting or exhilarating, because listening does not take place in the outer world of sounding music but in the silent inner world of our head and our brain.
Article
Full-text available
Some scholars consider music to exemplify the classic criteria for a complex human adaptation, including universality, orderly development, and special-purpose cortical processes. The present account focuses on processing predispositions for music. The early appearance of receptive musical skills, well before they have obvious utility, is consistent with their proposed status as predispositions. Infants' processing of musical or music-like patterns is much like that of adults. In the early months of life, infants engage in relational processing of pitch and temporal patterns. They recognize a melody when its pitch level is shifted upward or downward, provided the relations between tones are preserved. They also recognize a tone sequence when the tempo is altered so long as the relative durations remain unchanged. Melodic contour seems to be the most salient feature of melodies for infant listeners. However, infants can detect interval changes when the component tones are related by small-integer frequency ratios. They also show enhanced processing for scales with unequal steps and for metric rhythms. Mothers sing regularly to infants, doing so in a distinctive manner marked by high pitch, slow tempo, and emotional expressiveness. The pitch and tempo of mothers' songs are unusually stable over extended periods. Infant listeners prefer the maternal singing style to the usual style of singing, and they are more attentive to maternal singing than to maternal speech. Maternal singing also has a moderating effect on infant arousal. The implications of these findings for the origins of music are discussed.
Article
Full-text available
Cardiac responses of 36- to 39-week-old (GA) fetuses were tested with a no-delay pulsed stimulation paradigm while exhibiting a low heart rate (HR) variability (the HR pattern recorded when fetuses are in the 1f behavioral state). We examined whether fetuses could discriminate between two low-pitched piano notes, D4 (F(0) = 292 Hz/292-1800 Hz) and C5 (F(0) = 518 Hz/518-300 Hz). Seventy percent of all fetuses reacted to the onset of the first note (D4 or C5) with the expected cardiac deceleration. After heart rate returned to baseline, the note was changed (to C5 or D4, respectively). Ninety percent of the fetuses who reacted to the note switch did it with another cardiac deceleration. Control fetuses, for whom the first note did not change, displayed few cardiac decelerations. Thus, fetuses detected and responded to the pulsed presentation of a note and its subsequent change regardless of which note was presented first. Because perceived loudness (for adults) of the notes was controlled, it seems that the note's differences in F(0) and frequency band were relevant for detecting the change. Fetuses' ability to discriminate between spectra that lay within the narrow range of voice F(0) and F(1) formants may play an important role in the earliest developmental stages of speech perception.
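The dependent measure in this paradigm is a cardiac deceleration relative to the pre-stimulus baseline. A minimal sketch of such a detector on a simulated heart rate trace is given below; the baseline window, drop threshold and minimum duration are illustrative assumptions, not the study's scoring criteria.

import numpy as np

def detect_deceleration(hr_bpm, fs, stim_onset_s, baseline_s=10.0,
                        drop_bpm=5.0, min_dur_s=5.0):
    # Flag a cardiac deceleration: heart rate staying at least drop_bpm below
    # the pre-stimulus baseline for at least min_dur_s seconds after onset.
    onset = int(stim_onset_s * fs)
    baseline = hr_bpm[max(0, onset - int(baseline_s * fs)):onset].mean()
    below = hr_bpm[onset:] < baseline - drop_bpm
    run = longest = 0
    for b in below:                            # longest run below threshold
        run = run + 1 if b else 0
        longest = max(longest, run)
    return longest >= int(min_dur_s * fs)

# Toy trace: fetal heart rate sampled at 1 Hz, ~140 bpm, dipping after a
# note onset at 30 s.
fs = 1
hr = 140 + np.random.default_rng(1).normal(0, 1, 90)
hr[35:55] -= 8.0                               # simulated deceleration
print(detect_deceleration(hr, fs, stim_onset_s=30))   # expected: True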
Article
Full-text available
Human hearing develops progressively during the last trimester of gestation. Near-term fetuses can discriminate acoustic features, such as frequencies and spectra, and process complex auditory streams. Fetal and neonatal studies show that they can remember frequently recurring sounds. However, existing data can only show retention intervals up to several days after birth. Here we show that auditory memories can last at least six weeks. Experimental fetuses were given precisely controlled exposure to a descending piano melody twice daily during the 35th, 36th, and 37th weeks of gestation. Six weeks later we assessed the cardiac responses of 25 exposed infants and 25 naive control infants, while in quiet sleep, to the descending melody and to an ascending control piano melody. The melodies had precisely inverse contours but similar spectra, identical duration, tempo and rhythm, and thus almost identical amplitude envelopes. All infants displayed a significant heart rate change. In exposed infants, the descending melody evoked a cardiac deceleration that was twice as large as the decelerations elicited by the ascending melody and by both melodies in control infants. Thus, three weeks of prenatal exposure to a specific melodic contour affects infants' auditory processing or perception, i.e., impacts the autonomic nervous system at least six weeks later, when infants are 1 month old. Our results extend the retention interval over which a prenatally acquired memory of a specific sound stream can be observed from 3-4 days to six weeks. The long-term memory for the descending melody is interpreted in terms of enduring neurophysiological tuning, and its significance for the developmental psychobiology of attention and perception, including early speech perception, is discussed.
Article
This article focuses on musically relevant psychological aspects of prenatal development: the development of perception, cognition, and emotion; the relationships between them; and the musical and musicological implications of those relationships. It begins by surveying relevant foetal sensory abilities: hearing, the vestibular sense of balance and acceleration, and the proprioceptive sense of body orientation and movement. All those senses are relevant for musical development, since in all known cultures music is inseparable from bodily movement and gesture, whether real or implied. The article then considers what sounds and other stimuli are available to the foetus: what patterns are the earliest to be perceptually learnt? It examines psychological and philosophical issues of foetal attention, 'consciousness', learning, and memory. The article closes with speculations about the possible role of prenatal development in the phylogeny of musical behaviours.
Article
Introduction: The developing fetus relies on the maternal blood supply to provide the choline it requires for making membrane lipids, synthesizing acetylcholine, and performing important methylation reactions. It is vital, therefore, that the placenta is efficient at transporting choline from the maternal to the fetal circulation. Although choline transporters have been found in term placenta samples, little is known about what cell types express specific choline transporters and how expression of the transporters may change over gestation. The objective of this study was to characterize choline transporter expression levels and localization in the human placenta throughout placental development. Methods: We analyzed CTL1 and -2 expression over gestation in human placental biopsies from 6 to 40 weeks gestation (n = 6-10 per gestational window) by immunoblot analysis. To determine the cellular expression pattern of the choline transporters throughout gestation, immunofluorescence analysis was then performed. Results: Both CTL1 and CTL2 were expressed in the chorionic villi from 6 weeks gestation to term. Labor did not alter expression levels of either transporter. CTL1 localized to the syncytial trophoblasts and the endothelium of the fetal vasculature within the chorionic villous structure. CTL2 localized mainly to the stroma early in gestation and by the second trimester co-localized with CTL1 at the fetal vasculature. Discussion: The differential expression pattern of CTL1 and CTL2 suggests that CTL1 is the key transporter involved in choline transport from maternal circulation and both transporters are likely involved in stromal and endothelial cell choline transport.
Article
Confirmation of the techniques used to stimulate the fetus in utero led us to an exhaustive study of spontaneous intrauterine noises and of noises modified by different stimuli (while evaluating the diagnosis of fetal distress with a flat cardiac rhythm during pregnancy). By recording the signal and analysing the resulting information simultaneously with elaborate techniques, we were able to show incontrovertibly that, over and above the basal sound environment, the fetus can hear the mother's voice and other voices; these are perfectly audible to it but muffled, because the high frequencies are absorbed. The first results of stimulation with calibrated sounds seem to confirm that a fetus in distress does not react to a sound stimulus, whereas a fetus that is not in distress reacts immediately with a change in heart rate, often accompanied by movements. This technique, incidentally, can also be used to assess whether the fetus hears well in utero in cases of familial deafness or of rubella during pregnancy.
Article
This chapter is “typological” in two senses: Whereas the first section considers some of the sounds and sound patterns of Swedish from a universal-typological point of view, the second section discusses the considerable phonetic variability observed across the various dialects of the language. It is argued that, with some exceptions, Swedish is typologically fairly mainstream. Exceptions concern particularly the inventory of non-back rounded vowels, voiceless fricatives and partly prosody. The Swedish dialects are found to contain several distinct consonant and vowel types that are not encountered in Standard varieties. It is also found that the intonational structure of the Swedish dialects is fairly complex and diversified. The third section concludes the chapter with some informal observations of possible sound changes in progress.
Book
In this book, Richard Wiese provides the most complete and up-to-date description presently available of the phonology of German. Starting with a presentation of phonemes and their features, the author then describes in detail syllables, higher prosodic units, phonological conditions of word formation, patterns of redundancy for features, phonological rules, and rules of stress for words and phrases, giving particular emphasis to the interaction of morphology and phonology. He focuses on the present-day standard language, but includes occasional discussion of other variants and registers. The study is informed by recent models in phonological theory, and for phonologists and morphologists it provides both a rich source of material and a critical discussion of current problems and their solutions. It also serves as an introduction to the sound system of German for the non-specialist reader.
Article
Existing theories of the origins of music and religion fail to account directly and convincingly for their universal emotional power and behavioural costliness. The theory of prenatal origins is based on empirically observable phenomena and involves prenatal classical conditioning, postnatal operant conditioning and the adaptive value of mother-infant bonding. The human fetus can perceive sound and acceleration from gestational week 20. The most salient sounds for the fetus are internal to the mother's body and associated with vocalisation, blood circulation, impacts (footfalls), and digestion. The protomusical sensitivity of infants may be based on prenatal associations between the mother's changing physical and emotional state and concomitant changes in both hormone levels in the placental blood and prenatally audible sound/movement patterns. Protomusical aspects of motherese, play and ritual may have emerged during a multigenerational process of operant conditioning on the basis of prenatally established associations among sound, movement and emotion. The infant's multimodal cognitive representation of its mother (mother schema) begins to develop before birth and may underlie music's personal qualities, religion's supernatural agents, and the link between the two. Prenatal theory can contribute to an explanation of musical universals such as specific features of rhythm and melody and associations between music and body movement, as well as universal commonalities of musical and religious behaviour and experience such as meaning, fulfilment, and altered states of consciousness.
Article
Musicality in human motives, the psycho-biological source of music, is described as a talent inherent in the unique way human beings move, and hence experience their world, their bodies and one another. It originates in the brain images of moving and feeling that generate and guide behaviour in time, with goal-defining purposefulness and creativity. Intelligent perception, cognition and learning, and the potentiality for immediate sympathy between humans for expressions of intrinsic motives in narrative form (linguistic and non-linguistic), depend on this spontaneous, self-regulating brain activity. It is proposed that evolution of human bipedal locomotion and the pressure of social intelligence set free a new poly-rhythmia of motive processes, and that these generate fugal complexes of the Intrinsic Motive Pulse (IMP), with radical consequences for human imagination, thinking, remembering and communicating. Gestural mimesis and rhythmic narrative expression of purposes and images of awareness, regulated by, and regulating, dynamic emotional processes, form the foundations of human intersubjectivity, and of musicality. Acquired musical skill and the conventions of musical culture are animated from this core process in the human mind. Research on the dynamics of protoconversations and musical games with infants elucidates the rhythmic and prosodic foundations of sympathetic engagement in expressive exchanges. Developments in the first year prove the importance of the impulses of natural musicality in the emergence of cooperative awareness, and show how shared participation in the expressive phrases and emotional transformations of vocal games can facilitate not only imitation of speech, but interest in all shared meanings, or conventional uses, of objects and actions. Disturbances of early communication attributable to emotional unavailability of a depressed mother, or those due to sensory, motor or emotional handicap that causes a child to fail to react in an expected normal way, both confirm the crucial function in the development of intelligence and personality of sympathetic motives shared between adult and child in a secure and affectionate relationship, and offer a way of promoting development by supporting these motives. These facts establish that the parameters of musicality are intrinsically determined in the brain, or innate, and necessary for human development. Through their effects in emotional integration and the collaborative learning that leads to mastery of cultural knowledge, cultural skills, and language, they express the essential generator of human cognitive development.
Article
Significance: Learning, the foundation of adaptive and intelligent behavior, is based on changes in neural assemblies and reflected by the modulation of electric brain responses. In infancy, long-term memory traces are formed by auditory learning, improving discrimination skills, in particular those relevant for speech perception and understanding. Here we show direct neural evidence that neural memory traces are formed by auditory learning prior to birth. Our findings indicate that prenatal experiences have a remarkable influence on the brain’s auditory discrimination accuracy, which may support, for example, language acquisition during infancy. Consequently, our results also imply that it might be possible to support early auditory development and potentially compensate for difficulties of genetic nature, such as language impairment or dyslexia.
Article
In the current resurgence of interest in the biological basis of animal behavior and social organization, the ideas and questions pursued by Charles Darwin remain fresh and insightful. This is especially true of The Descent of Man and Selection in Relation to Sex, Darwin's second most important work. This edition is a facsimile reprint of the first printing of the first edition (1871), not previously available in paperback. The work is divided into two parts. Part One marshals behavioral and morphological evidence to argue that humans evolved from other animals. Darwin shows that human mental and emotional capacities, far from making human beings unique, are evidence of an animal origin and evolutionary development. Part Two is an extended discussion of the differences between the sexes of many species and how they arose as a result of selection. Here Darwin lays the foundation for much contemporary research by arguing that many characteristics of animals have evolved not in response to the selective pressures exerted by their physical and biological environment, but rather to confer an advantage in sexual competition. These two themes are drawn together in two final chapters on the role of sexual selection in humans. In their Introduction, Professors Bonner and May discuss the place of The Descent in its own time and its relation to current work in biology and other disciplines.
Article
Using examples from a wide variety of languages, this book reveals why speakers vary their pitch, what these variations mean, and how they are integrated into our grammars. All languages use modulations in pitch to form utterances. Pitch modulation encodes lexical “tone” to signal boundaries between morphemes or words, and encodes “intonation” to give words and sentences an additional meaning that isn’t part of their original sense. © Carlos Gussenhoven 2004 and Cambridge University Press, 2010.
Article
Reviews studies focusing on (1) musical elements of vocalization and the evolution and ontogeny of language and (2) the infant's earliest musical experience (baby talk). It is concluded that there is a gap in research concerning the directions of musicological interest, the implications of biological aspects, and their relation to the development of integrative competence. A pilot observation of the author's infant daughter illustrates aspects of dyadic infant–adult communication. In general, rhythmic patterns appear quite early, first as by-products of autonomic or other motor behaviors of a rhythmical character, and later as vocal products. The infant's musical rhythm seems to be closer to folk music or jazz improvisation than to classical Western music. The most advanced vocal performance of musical capabilities in children under 2 years of age is the ability to learn to sing a song. The significance of musical elements in the development of integrative processes is discussed.
Article
Several cries of newborn infants are analyzed using computer spectrograms and methods from nonlinear dynamics. A rich variety of bifurcations (e.g. period doubling) and episodes of irregular behaviour are observed. Poincaré sections and the analysis of the underlying attractor suggest that these noise-like episodes are low-dimensional deterministic chaos. Possible implications for the very early diagnosis of brain disorders are discussed.
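For readers curious what such an analysis looks like in practice, here is a minimal sketch: a spectrogram of a toy signal containing a subharmonic (the signature of period doubling) and a simple time-delay embedding of the kind used to inspect the underlying attractor. The toy signal and all parameters are illustrative assumptions, not the study's data or settings.

import numpy as np
from scipy.signal import spectrogram

def delay_embed(x, delay, dim=2):
    # Time-delay embedding used to reconstruct an attractor from a scalar
    # signal; with dim=2 it doubles as a simple return map.
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
# Toy "cry-like" signal: a 450 Hz fundamental plus a subharmonic at 225 Hz,
# mimicking a period-doubling episode (real cries are far less regular).
x = np.sin(2 * np.pi * 450 * t) + 0.4 * np.sin(2 * np.pi * 225 * t)

f, seg_times, S = spectrogram(x, fs=fs, nperseg=1024)   # narrow-band spectrogram
print("strongest spectral component (Hz):", f[S.mean(axis=1).argmax()])

embedded = delay_embed(x, delay=int(fs / 450 / 4))      # ~ quarter-period delay
print("embedded points for the return map:", embedded.shape)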
Article
Four-day-old French and 2-month-old American infants distinguish utterances in their native languages from those of another language. In contrast, neither group gave evidence of distinguishing utterances from two foreign languages. A series of control experiments confirmed that the ability to distinguish utterances from two different languages appears to depend upon some familiarity with at least one of the two languages. Finally, two experiments with low-pass-filtered versions of the samples replicated the main findings of discrimination of the native language utterances. These latter results suggest that the basis for classifying utterances from the native language may be provided by prosodic cues.
Résumé: Two groups of infants from different language communities were tested on their ability to discriminate sequences of spontaneous speech produced by a bilingual speaker in two different languages. Four-day-old French newborns were able to discriminate French sequences from similar sequences in Russian. Two-month-old American infants showed similar behaviour when presented with sequences in English and Italian. However, neither group of infants showed a discrimination response to sequences drawn from two foreign languages (French and Russian for the American infants; English and Italian for the French newborns). The same was true for foreign newborns born in France when presented with utterances in French and Russian. Thus, some familiarity with one of the two languages appears necessary to discriminate utterances from two different languages. Finally, the newborns and infants also showed discrimination responses to filtered versions of the utterances. These latter results suggest that infants may classify utterances as belonging to their native language on the basis of prosodic cues.
Article
Newborn infants whose mothers were monolingual speakers of Spanish or English were tested with audio recordings of female strangers speaking either Spanish or English. Infant sucking controlled the presentation of auditory stimuli. Infants activated recordings of their native language for longer periods than the foreign language.
Article
Pregnant women recited a particular speech passage aloud each day during their last 6 weeks of pregnancy. Their newborns were tested with an operant-choice procedure to determine whether the sounds of the recited passage were more reinforcing than the sounds of a novel passage. The previously recited passage was more reinforcing. The reinforcing value of the two passages did not differ for a matched group of control subjects. Thus, third-trimester fetuses experience their mothers' speech sounds, and this prenatal auditory experience can influence postnatal auditory preferences.
Article
This study uses tonal alignment and other analyses to examine the structure of French intonational rises and intonational phonology more generally. I argue that the early rise and the late rise of the French accentual phrase (AP) are structurally different, that the former is a bitonal phrase accent and the latter a bitonal pitch accent. The late rise does not share all the characteristics typically associated with pitch accents, a finding discussed in relation to cross-linguistic distinctions between pitch accents and edge tones. Phrase length, expressed in number of syllables or in clock time, is the best predictor of the realization of the early rise, and thus of two-rise (LHLH) APs. I propose that the early L is edge-seeking—it seeks an association to the beginning edge of the first content word syllable of the AP and an optional association to the edge of an earlier syllable, which is often, but not always, the first syllable of the AP. For both rises, only one end point is anchored to a segmental landmark (the L beginning of the early rise; the H end of the late rise). The French data thus provide evidence that the strong segmental anchoring hypothesis, in which both ends of rises have anchor points, cannot be generalized to all spoken languages.
Article
With the high-amplitude sucking procedure, newborns were presented with two lists of phonetically varied Japanese words differing in pitch contour. Discrimination of the lists was found, thus indicating that newborns are able to extract pitch contour information at the word level.
Article
Cross-language studies, as reflected by the scalp-recorded frequency-following response (FFR) to voice pitch, have shown the influence of dominant linguistic environments on the encoding of voice pitch at the brainstem level in normal-hearing adults. Research questions that remained unanswered included the characteristics of the FFR to voice pitch in neonates during their immediate postnatal period and the relative contributions of the biological capacities present at birth versus the influence of the listener's postnatal linguistic experience. The purpose of this study was to investigate the characteristics of FFR to voice pitch in neonates during their first few days of life and to examine the relative contributions of the "biological capacity" versus "linguistic experience" influences on pitch processing in the human brainstem. Twelve American neonates (five males, 1-3 days old) and 12 Chinese neonates (seven males, 1-3 days old) were recruited to examine the characteristics of the FFRs during their immediate postnatal days of life. Twelve American adults (three males; age: mean ± SD = 24.6 ± 3.0 yr) and 12 Chinese adults (six males; age: mean ± SD = 25.3 ± 2.6 yr) were also recruited to determine the relative contributions of biological and linguistic influences. A Chinese monosyllable that mimics the English vowel /i/ with a rising pitch (117-166 Hz) was used to elicit the FFR to voice pitch in all participants. Two-way analysis of variance (i.e., the language [English versus Chinese] and age [neonate versus adult] factors) showed a significant difference in Pitch Strength for language (p = 0.035, F = 4.716). A post hoc Tukey-Kramer analysis further demonstrated that Chinese adults had significantly larger Pitch Strength values than Chinese neonates (p = 0.024). This finding, coupled with the fact that American neonates and American adults had comparable Pitch Strength values, supported the linguistic experience model. On the other hand, Pitch Strength obtained from the American neonates, American adults, and Chinese neonates were not significantly different from each other, supporting the biological capacity model. This study demonstrated an early maturation of voice-pitch processing in neonates starting from 1 to 3 days after birth and a significant effect of linguistic experience on the neural processing of voice pitch at the brainstem level. These findings provide a significant conceptual advancement and a basis for further examination of developmental maturation of subcortical representation of speech features, such as pitch, timing, and harmonics. These findings can also be used to help identify neonates at risk for delays in voice-pitch perception and provide new directions for preventive and therapeutic interventions for patients with central auditory processing deficits, hearing loss, and other types of communication disorders.
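The group comparison described here is a 2 × 2 (language × age) analysis of variance on Pitch Strength. A minimal sketch with statsmodels on made-up values is shown below; the numbers are placeholders purely to illustrate the design, not data from the study.

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical Pitch Strength values for the four groups; per-subject values
# are not reported in the abstract, so these numbers only illustrate the design.
data = pd.DataFrame({
    "pitch_strength": [0.42, 0.45, 0.40, 0.44,    # Chinese neonates
                       0.55, 0.58, 0.60, 0.57,    # Chinese adults
                       0.44, 0.46, 0.43, 0.47,    # American neonates
                       0.45, 0.48, 0.44, 0.46],   # American adults
    "language": ["Chinese"] * 8 + ["English"] * 8,
    "age": (["neonate"] * 4 + ["adult"] * 4) * 2,
})

# Two-way ANOVA with interaction, mirroring the language x age design.
model = ols("pitch_strength ~ C(language) * C(age)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))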