Chapter

Emotional communication in monkeys: Music to their ears?

Abstract

Why do we believe we understand animal voices, such as the whining or aggressive barking of our dogs, or the longing meows of our cats? Why do we frequently assess deep voices as dominant and high voices as submissive? Are there universal principles governing our own communication system? Can we even see how closely animals are related to us by constructing an evolutionary tree based on similarities and dissimilarities in acoustic signaling? Research on the role of emotions in acoustic communication and its evolution was neglected for a long time. When we infect others with our laughter, soothe a crying baby with a lullaby, or get goose bumps listening to classical music, we are barely aware of the complex processes upon which this behavior is based. It is not facial expressions or body language that affect us, but sound. Acoustically conveyed emotions are present in music and speech as "emotional prosody" and allow us to communicate not only verbally but also emotionally. In this book we demonstrate new and surprising insights into how acoustically conveyed emotions are generated and processed in animals and humans. We demonstrate why the acoustic communication of emotion is of paramount importance and essential for communication across all mammal species and human cultures.
... Anger is conveyed by an increase in fundamental frequency and by higher intensity (amplitude), and fear is shown with an increase in fundamental frequency, many high-frequency components, and a faster rate of articulation. Snowdon and Teie (2013) hypothesized that harmonic structures and pure tones would be associated with positive states, whereas dissonant (or noisy) structures would be associated with aggression, fear, and defense. Staccato calls would be arousing, whereas legato notes would be calming. ...
... Morton evaluated call structures in fear and aggressive contexts in a variety of bird and mammal species and suggested that high-pitched, narrow-band, legato calls were used in fear contexts and that low-pitched, broadband (or noisy) calls signaled aggression. Snowdon and Teie (2013) applied their framework of emotional structures in music to the calls of cotton-top tamarins. Recordings of spontaneous calls were presented to musicians, who evaluated the timbre, tempo, rate of articulation, and pitch of calls without knowing the context in which the calls were given. ...
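To make the two parameters cited most often above concrete, here is a minimal, illustrative Python sketch that estimates fundamental frequency (by autocorrelation) and intensity (as RMS amplitude) from a signal. The synthetic tones, sample rate, and search bounds are invented for the example and are not taken from Morton's or Snowdon and Teie's work.

```python
import math

SR = 8000  # sample rate in Hz (illustrative choice)

def synth_tone(freq, dur=0.25, amp=0.5):
    """Generate a pure sine tone as a stand-in for a recorded call."""
    n = int(SR * dur)
    return [amp * math.sin(2 * math.pi * freq * t / SR) for t in range(n)]

def rms(signal):
    """Root-mean-square amplitude, a simple proxy for intensity."""
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def fundamental(signal, fmin=80, fmax=1000):
    """Estimate fundamental frequency (F0) by picking the lag with
    the highest autocorrelation within the [fmin, fmax] search band."""
    best_lag, best_corr = 0, float("-inf")
    for lag in range(SR // fmax, SR // fmin + 1):
        corr = sum(signal[i] * signal[i + lag] for i in range(len(signal) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return SR / best_lag

# Two caricatured profiles from the text: low-pitched/quieter versus
# high-pitched/louder (higher F0 and intensity signaling fear/arousal).
low = synth_tone(150, amp=0.3)
high = synth_tone(600, amp=0.8)

print(round(fundamental(low)), round(rms(low), 2))
print(round(fundamental(high)), round(rms(high), 2))
```

On real recordings one would analyze short overlapping frames rather than a whole call, but the same two measurements underlie the feature descriptions in the snippets above.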
Article
Full-text available
There have been many attempts to discuss the evolutionary origins of music. We review theories of music origins and take the perspective that music is originally derived from emotional signals. We show that music has adaptive value through emotional contagion, social cohesion, and improved well-being. We trace the roots of music through the emotional signals of other species suggesting that the emotional aspects of music have a long evolutionary history. We show how music and speech are closely interlinked with the musical aspects of speech conveying emotional information. We describe acoustic structures that communicate emotion in music and present evidence that these emotional features are widespread among humans and also function to induce emotions in animals. Similar acoustic structures are present in the emotional signals of nonhuman animals. We conclude with a discussion of music designed specifically to induce emotional states in animals.
... Thus, the specific structure of the music to be used must be chosen to match the goals of those working with animals. If one is using music that is within the perceptual range of the species and music that has the specific structural features that are predicted to induce the desired behavior, then music may be used successfully [84]. In the next section, I describe research on music and behavior of animals that incorporated these points into the experimental design. ...
Article
Full-text available
Playing music or natural sounds to animals in human care is thought to have beneficial effects. An analysis of published papers on the use of human-based music with animals demonstrates a variety of different results even within the same species. These mixed results suggest the value of tailoring music to the sensory systems of the species involved and in selecting musical structures that are likely to produce the desired effects. I provide a conceptual framework based on the combined knowledge of the natural communication system of a species coupled with musical structures known to differentially influence emotional states, e.g., calming an agitated animal versus stimulating a lethargic animal. This new concept of animal-based music, which is based on understanding animal communication, will lead to more consistent and specific effects of music. Knowledge and appropriate use of animal-based music are important in future research and applications if we are to improve the well-being of animals that are dependent upon human care for their survival.
Article
Full-text available
This paper presents a new line of inquiry into when and how music as a semiotic system was born. Eleven principal expressive aspects of music each contains specific structural patterns whose configuration signifies a certain affective state. This distinguishes the tonal organization of music from the phonetic and prosodic organization of natural languages and animal communication. The question of music’s origin can therefore be answered by establishing the point in human history at which all eleven expressive aspects might have been abstracted from the instinct-driven primate calls and used to express human psycho-emotional states. Etic analysis of acoustic parameters is the prime means of cross-examination of the typical patterns of expression of the basic emotions in human music versus animal vocal communication. A new method of such analysis is proposed here. Formation of such expressive aspects as meter, tempo, melodic intervals, and articulation can be explained by the influence of bipedal locomotion, breathing cycle, and heartbeat, long before Homo sapiens. However, two aspects, rhythm and melodic contour, most crucial for music as we know it, lack proxies in the Paleolithic lifestyle. The available ethnographic and developmental data leads one to believe that rhythmic and directional patterns of melody became involved in conveying emotion-related information in the process of frequent switching from one call-type to another within the limited repertory of calls. Such calls are usually adopted for the ongoing caretaking of human youngsters and domestic animals. The efficacy of rhythm and pitch contour in affective communication must have been spontaneously discovered in new important cultural activities. 
The most likely scenario for music to have become fully semiotically functional and to have spread wide enough to avoid extinctions is the formation of cross-specific communication between humans and domesticated animals during the Neolithic demographic explosion and the subsequent cultural revolution. Changes in distance during such communication must have promoted the integration between different expressive aspects and generated the basic musical grammar. The model of such communication can be found in the surviving tradition of Scandinavian pastoral music - kulning. This article discusses the most likely ways in which such music evolved.
Article
Pitch syntax is an important part of musical syntax. It is a complex hierarchical system that involves generative production and perception based on pitch. Because hierarchical systems are also present in language grammar, the processing of a pitch hierarchy is predominantly explained by the activity of cognitive mechanisms that are not solely specific to music. However, in contrast to the processing of language grammar, which is mainly cognitive in nature, the processing of pitch syntax includes subtle emotional sensations that are often described in terms of tension and resolution or instability and stability. This difference suggests that the very nature of pitch syntax may be evolutionarily older than grammar in language, and has served another adaptive function. The aim of this paper is to indicate that the recognition of pitch structure may be a separate ability, rather than merely being part of general syntactic processing. It is also proposed that pitch syntax has evolved as a specific tool for social bonding in which subtle emotions of tension and resolution are indications of mutual trust. From this perspective, it is considered that musical pitch started to act as a medium of communication by the means of spectral synchronization between the brains of hominins. Pitch syntax facilitated spectral synchronization between performers of a well-established, enduring, communal ritual and in this way increased social cohesion. This process led to the evolution of new cortico-subcortical pathways that enabled the implicit learning of pitch hierarchy and the intuitive use of pitch structure in music before language, as we know it now, began.
Chapter
This chapter focuses on the informal learning opportunities that arise from environmental enrichment and what their consequences are for the animal. Environmental enrichment typically involves the addition of novel stimuli to a captive animal's environment in an attempt to improve animal welfare, for example the provision of toys to an enclosure. If managed properly, all categories of environmental enrichment (social, occupational or cognitive, physical, sensory, and nutritional) can provide informal learning opportunities for animals. The arrival of internet video calling has created a number of extremely interesting social enrichment opportunities, for example the ability of animals of the same species to interact visually and aurally in a remote manner. In the case of cognitive enrichment, food is often used to lure the animal into using the enrichment; it is then less clear whether the primary reinforcement is the food or the learning opportunity.
Article
Full-text available
Recent work has identified the physical features of smiles that accomplish three tasks fundamental to human social living: rewarding behavior, establishing and managing affiliative bonds, and negotiating social status. The current work extends the social functional account to laughter. Participants (N = 762) rated the degree to which reward, affiliation, or dominance (between-subjects) was conveyed by 400 laughter samples acquired from a commercial sound effects website. Inclusion of a fourth rating dimension, spontaneity, allowed us to situate the current approach in the context of existing laughter research, which emphasizes the distinction between spontaneous and volitional laughter. We used 11 acoustic properties extracted from the laugh samples to predict participants' ratings. Actor sex moderated, and sometimes even reversed, the relation between acoustics and participants' judgments. Spontaneous laughter appears to serve the reward function in the current framework, as similar acoustic properties guided perceiver judgments of spontaneity and reward: reduced voicing and increased pitch, increased duration for female actors, and increased pitch slope, center of gravity, first formant, and noisiness for male actors. Affiliation ratings diverged from reward in their sex-dependent relationship to intensity and, for females, reduced pitch range and raised second formant. Dominance displayed the most distinct pattern of acoustic predictors, including increased pitch range, reduced second formant in females, and decreased pitch variability in males. We relate the current findings to existing findings on laughter and human and non-human vocalizations, concluding that laughter can signal much more than felt or faked amusement.
Article
Full-text available
Although the idea that pulse in music may be related to human pulse is ancient and has recently been promoted by researchers (Parncutt, 2006; Snowdon and Teie, 2010), there has been no ordered delineation of the characteristics of music that are based on the sounds of the womb. I describe features of music that are based on sounds that are present in the womb: tempo of pulse (pulse is understood as the regular, underlying beat that defines the meter), amplitude contour of pulse, meter, musical notes, melodic frequency range, continuity, syllabic contour, melodic rhythm, melodic accents, phrase length, and phrase contour. There are a number of features of prenatal development that allow for the formation of long-term memories of the sounds of the womb in the areas of the brain that are responsible for emotions. Taken together, these features and the similarities between the sounds of the womb and the elemental building blocks of music allow for a postulation that the fetal acoustic environment may provide the bases for the fundamental musical elements that are found in the music of all cultures. This hypothesis is supported by a one-to-one matching of the universal features of music with the sounds of the womb: (1) all of the regularly heard sounds that are present in the fetal environment are represented in the music of every culture, and (2) all of the features of music that are present in the music of all cultures can be traced to the fetal environment.
Article
Full-text available
The origins of music and musical emotions are still an enigma. Here I propose a comprehensive hypothesis on the origins and evolution of music, dance, and speech from a biological and sociological perspective. I suggest that every pitch interval between neighboring notes in music represents a corresponding movement pattern through interpreting the Doppler effect of sound, which not only provides a possible explanation for the transposition invariance of music, but also integrates music and dance into a common form—rhythmic movements. Accordingly, investigating the origins of music poses the question: why do humans appreciate rhythmic movements? I suggest that human appreciation of rhythmic movements and rhythmic events developed from the natural selection of organisms adapting to the internal and external rhythmic environments. The perception and production of, as well as synchronization with, external and internal rhythms are so vital for an organism's survival and reproduction that animals have a rhythm-related reward and emotion (RRRE) system. The RRRE system enables the appreciation of rhythmic movements and events, and is integral to the origination of music, dance, and speech. The first type of rewards and emotions (rhythm-related rewards and emotions, RRREs) are evoked by music and dance, and have biological and social functions, which in turn promote the evolution of music, dance, and speech. These functions also evoke a second type of rewards and emotions, which I name society-related rewards and emotions (SRREs). The neural circuits of RRREs and SRREs develop in species formation and personal growth, with congenital and acquired characteristics, respectively; namely, music is the combination of nature and culture. This hypothesis provides probable selection pressures and outlines the evolution of music, dance, and speech.
The links between the Doppler effect and the RRREs and SRREs can be empirically tested, making the current hypothesis scientifically concrete.
Article
Full-text available
There is now a vigorous debate over the evolutionary status of music. Some scholars argue that humans have been shaped by evolution to be musical, while others maintain that musical abilities have not been a target of natural selection but reflect an alternative use of more adaptive cognitive skills. One way to address this debate is to break music cognition into its underlying components and determine whether any of these are innate, specific to music, and unique to humans. Taking this approach, Justus and Hutsler (2005) and McDermott and Hauser (2005) suggest that musical pitch perception can be explained without invoking natural selection for music. However, they leave the issue of musical rhythm largely unexplored. This comment extends their conceptual approach to musical rhythm and suggests how issues of innateness, domain specificity, and human specificity might be addressed.
Article
Full-text available
A. D. Patel and J. R. Daniele (2003) compared the rhythms of musical themes written by French and English composers. They found a significant difference that mirrors known prosodic differences in French and English speech. Specifically, Patel and Daniele found the note-to-note durational contrast to be higher in English music than in French music. Their study was based on 137 English themes and 181 French themes that were selected according to stringent criteria. Here we report a replication of Patel and Daniele with a greatly expanded sample of nearly 2000 themes.
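The note-to-note durational contrast measure used in this line of work is the normalized Pairwise Variability Index (nPVI): the mean normalized difference between successive durations, scaled by 100, so that perfectly even sequences score 0 and strongly alternating long-short sequences score higher. A minimal sketch of the computation (the example duration sequences are invented for illustration):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index: for each pair of
    successive durations, take |a - b| normalized by the pair's mean,
    then average over all pairs and scale by 100."""
    pairs = list(zip(durations, durations[1:]))
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

print(npvi([1, 1, 1, 1]))      # perfectly even durations -> 0
print(npvi([1, 0.5, 1, 0.5]))  # alternating long-short -> about 66.7
```

A higher nPVI for English than for French, in both speech and musical themes, is the contrast the study describes.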
Article
Full-text available
The preferences of 2- and 4-month-old infants for consonant versus dissonant two-tone intervals were tested by using a looking-time preference procedure. Infants of both ages preferred to listen to consonant over dissonant intervals and found it difficult to recover interest after a sequence of dissonant trials. Thus, sensitivity to consonance and dissonance is found before knowledge of scale structure and may be based on the innate structure of the inner ear and the firing characteristics of the auditory nerve. It is likely that consonance perception provides a bootstrap into the task of learning the pitch structure of the musical system to which the infant is exposed.
Article
Full-text available
We outline a model of nonhuman primate vocal behavior, proposing that the function of calling is to influence the behavior of conspecific receivers and that a Pavlovian conditioning framework can account for important aspects of how such influence occurs. Callers are suggested to use vocalizations to elicit affective responses in others, thereby altering the behavior of these individuals. Responses can either be unconditioned, being produced directly by the signal itself, or conditioned, resulting from past interactions in which the sender both called and produced affective responses in the receiver through other means.
Chapter
Anthropologists have long recognized that cultural evolution critically depends on the transmission and generation of information. However, between the selection pressures of evolution and the actual behaviour of individuals, scientists have suspected that other processes are at work. With the advent of what has come to be known as the cognitive revolution, psychologists are now exploring the evolved problem-solving and information-processing mechanisms that allow humans to absorb and generate culture. The purpose of this book is to introduce the newly crystallizing field of evolutionary psychology, which supplied the necessary connection between the underlying evolutionary biology and the complex and irreducible social phenomena studied by anthropologists, sociologists, economists, and historians.
Article
Patients with pathological laughter and crying (PLC) are subject to relatively uncontrollable episodes of laughter, crying or both. The episodes occur either without an apparent triggering stimulus or following a stimulus that would not have led the subject to laugh or cry prior to the onset of the condition. PLC is a disorder of emotional expression rather than a primary disturbance of feelings, and is thus distinct from mood disorders in which laughter and crying are associated with feelings of happiness or sadness. The traditional and currently accepted view is that PLC is due to the damage of pathways that arise in the motor areas of the cerebral cortex and descend to the brainstem to inhibit a putative centre for laughter and crying. In that view, the lesions 'disinhibit' or 'release' the laughter and crying centre. The neuroanatomical findings in a recently studied patient with PLC, along with new knowledge on the neurobiology of emotion and feeling, gave us an opportunity to revisit the traditional view and propose an alternative. Here we suggest that the critical PLC lesions occur in the cerebro-ponto-cerebellar pathways and that, as a consequence, the cerebellar structures that automatically adjust the execution of laughter or crying to the cognitive and situational context of a potential stimulus operate on the basis of incomplete information about that context, resulting in inadequate and even chaotic behaviour.
Article
In two realms of music production, the amplified electric guitar and the recording studio, archaic technology is often the state of the art. Electric guitarists, recording engineers and producers rely upon technology that was perfected in the 1950s and 1960s for much of their sound-making and sound-processing needs because it provides a desirable sonic character. The author asserts that sonic transparency ("uncolored" sound made possible by modern solid-state and/or digital equipment) is antithetical to musical pursuits in which distortion itself is an essential part of the aesthetic.
Article
The physiological mechanisms and acoustic principles underlying sound production in primates are important for analyzing and synthesizing primate vocalizations, for determining the range of calls that are physically producible, and for understanding primate communication in the broader comparative context of what is known about communication in other vertebrates. In this paper we discuss what is known about vocal production in nonhuman primates, relying heavily on models from speech and musical acoustics. We first describe the role of the lungs and larynx in generating the sound source, and then discuss the effects of the supralaryngeal vocal tract in modifying this source. We conclude that more research is needed to resolve several important questions about the acoustics of primate calls, including the nature of the vocal tract's contribution to call production. Nonetheless, enough is known to explore the implications of call acoustics for the evolution of primate communication. In particular, we discuss how anatomy and physiology may provide constraints resulting in “honest” acoustic indicators of body size. © 1995 Wiley-Liss, Inc.