Article

A calibrated recording and analysis of the pitch, force and quality of vocal tones expressing happiness and sadness; and a determination of the pitch and force of the subjective concepts of ordinary, soft and loud tones.

Speech Monographs. DOI: 10.1080/03637753509374833

ABSTRACT: Nine males and ten females were asked to repeat the vowel "ah" immediately after reading a piece of literature and listening to phonographic recordings of music judged by "experts" as indicating sadness and happiness. Oscillographic records of the vowels were made and analyzed. Results showed that the vocal responses to stimuli which evoke happiness are appreciably higher in pitch than the ordinary tones of the same subjects and higher than tones representative of sad states. This difference was significant for both sexes. The average tones in response to literature or music judged as sad are practically the same as the subjects' ordinary tones. Differences in intensity and in tone quality were also observed for the two emotional states. Psychogalvanic readings taken during the experiment showed the presence of disturbances of an emotional nature. A second experiment, designed to determine the subjects' conception of ordinary, soft, and loud tones, showed that pitch changes with the intensity of the tones: soft tones are lower in pitch than those designated as ordinary, and loud tones are invariably higher in pitch than either.

  • ABSTRACT: Music is a powerful medium capable of eliciting a broad range of emotions. Although the relationship between language and music is well documented, relatively little is known about the effects of lyrics and the voice on the emotional processing of music and on listeners' preferences. In the present study, we investigated the effects of vocals in music on participants' perceived valence and arousal in songs. Participants (N = 50) made valence and arousal ratings for familiar songs that were presented with and without the voice. We observed robust effects of vocal content on perceived arousal. Furthermore, we found that the effect of the voice in enhancing arousal ratings is independent of the familiarity of the song and differs across genders and ages: females were more influenced by vocals than males, and these gender effects were enhanced among older adults. Results highlight the effects of gender and aging in emotion perception and are discussed in terms of the social roles of music.
    Frontiers in Psychology 01/2013; 4:675. Impact factor: 2.80.
  • ABSTRACT: The present study was designed to determine whether the technique used to control the semantic content of emotional communications might influence the results of research on the effects of gender, age, and particular affects on accuracy of decoding tone of voice. Male and female college and elementary school students decoded a 48-item audio tape recording of emotional expressions encoded by two children and two college students. Six emotions (anger, fear, happiness, jealousy, pride, and sadness) were expressed in two types of content-standard messages, namely letters of the alphabet and an affectively neutral sentence. The results indicate that different methods of controlling content can indeed influence the results of studies of the determinants of decoding performance. Overall, subjects decoded emotions expressed in the standard sentence more accurately than emotions embedded in letters of the alphabet. A technique-by-emotion interaction, however, revealed that this was especially true for the purer emotions of anger, fear, happiness, and sadness; subjects identified the less pure emotions of jealousy and pride relatively more accurately when these were embedded in the alphabet technique. The implications of these results for research on the vocal communication of affect are briefly discussed.
    Journal of Nonverbal Behavior 01/1985; 9(2):121-129. Impact factor: 1.77.
  • Creating Personal, Social, and Urban Awareness Through Pervasive Computing (IGI Global, 11/2013); chapter 3, pages 53-85.