A calibrated recording and analysis of the pitch, force and quality of vocal tones expressing happiness and sadness; and a determination of the pitch and force of the subjective concepts of ordinary, soft and loud tones.

Speech Monographs DOI: 10.1080/03637753509374833

ABSTRACT: 9 males and 10 females were asked to repeat the vowel ah immediately after reading a piece of literature and listening to phonograph recordings of music judged by "experts" as indicating sadness and happiness. Oscillographic records of the vowels were made and analyzed. Results showed that the vocal responses to stimuli evoking happiness are appreciably higher in pitch than the same subjects' ordinary tones and higher than tones representative of sad states. This difference was significant in both sexes. The average tones in response to literature or music judged as sad are practically the same as the subjects' ordinary tones. Differences in intensity and in tone quality were also observed between the two emotional states. Psychogalvanic readings taken during the experiment showed the presence of disturbances of an emotional nature. A second experiment, designed to determine the subjects' conception of ordinary, soft, and loud tones, showed that pitch changes with the intensity of the tones: soft tones are lower in pitch than those designated as ordinary, and loud tones are invariably higher in pitch than either. (PsycINFO Database Record (c) 2012 APA, all rights reserved)

  • ABSTRACT: The present study was designed to determine whether the technique used to control the semantic content of emotional communications might influence the results of research on the effects of gender, age, and particular affects on accuracy of decoding tone of voice. Male and female college and elementary school students decoded a 48-item audio tape-recording of emotional expressions encoded by two children and two college students. Six emotions — anger, fear, happiness, jealousy, pride and sadness — were expressed in two types of content-standard messages, namely letters of the alphabet and an affectively neutral sentence. The results of the study indicate that different methods for controlling content can indeed influence the results of studies of determinants of decoding performance. Overall, subjects demonstrated greater accuracy when decoding emotions expressed in the standard sentence than when decoding emotions embedded in letters of the alphabet. A technique-by-emotion interaction, however, revealed that this was especially true for the purer emotions of anger, fear, happiness and sadness. Subjects identified the less pure emotions of jealousy and pride relatively more accurately when these emotions were embedded in the alphabet technique. The implications of these results for research concerning the vocal communication of affect are briefly discussed.
    Journal of Nonverbal Behavior 01/1985; 9(2):121-129. DOI:10.1007/BF00987143 · 1.77 Impact Factor
  • ABSTRACT: Reviews research on the expression of emotion through the nonverbal (prosodic) features of speech. Findings show that emotions can be expressed prosodically, apparently through a variety of prosodic features. This communication appears to be largely the same for different individuals and cultures, suggesting that the prosodic expression of emotion is not conventional. Some correlations between dimensions of emotions (e.g., anxiety, aggression) and prosodic features are discussed; activity or arousal seems to be signaled by increased pitch height, pitch range, loudness, and rate. The possibility that prosodic contours (patterns of pitch and loudness over time) are used to communicate specific emotions is explored. A number of authors suggest that anger is communicated by an even contour with occasional sharp increases in pitch and loudness. Methodological difficulties with the acoustical manipulation of relevant auditory and articulatory features are noted. It is suggested that a major step in investigating the prosodic expression of emotion will be learning how to synthesize various articulatory and auditory features. (3 p ref)
    Psychological Bulletin 04/1985; 97(3):412-429. DOI:10.1037/0033-2909.97.3.412 · 14.39 Impact Factor
  • ABSTRACT: neglect of the social signaling functions of affect vocalization, and (c) insufficiently precise conceptualization of the underlying emotional states. A "component patterning" model of vocal affect expression is proposed that attempts to link the outcomes of antecedent event evaluation to biologically based response patterns. On the basis of a literature survey of acoustic-phonetic evidence, the likely phonatory and articulatory correlates of the physiological responses characterizing different emotional states are described in the form of three major voice types (narrow-wide, lax-tense, full-thin). Specific predictions are made as to the changes in acoustic parameters resulting from changing voice types. These predictions are compared with the pattern of empirical findings yielded by a comprehensive survey of the literature on vocal cues in emotional expression. Although the comparison is largely limited to the tense-lax voice type (because acoustic parameters relevant to the other voice types have not yet been systematically studied), a high degree of convergence is revealed. It is suggested that the model may help to stimulate hypothesis-guided research as well as provide a framework for the development of appropriate research paradigms.
    Psychological Bulletin 04/1986; 99(2):143-65. DOI:10.1037//0033-2909.99.2.143 · 14.39 Impact Factor