
The voice of confidence: Paralinguistic cues and audience evaluation

Harvard University, Cambridge, Massachusetts, United States
Journal of Research in Personality 06/1973; 7(1):31-44. DOI: 10.1016/0092-6566(73)90030-5

ABSTRACT: A standard speaker read linguistically confident and doubtful texts in a confident or doubtful voice. A computer-based acoustic analysis of the four tapes showed that paralinguistic confidence was expressed by increased loudness of voice, rapid rate of speech, and infrequent, short pauses. Under some conditions, higher pitch levels and greater pitch and energy fluctuations in the voice were related to paralinguistic confidence. In a 2 × 2 design, observers perceived and used these cues to attribute confidence and related personality traits to the speaker. Both text and voice cues are related to confidence ratings; in addition, the two types of cue are related to differing personality attributes.
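The loudness and pause cues described in the acoustic analysis can be approximated directly from a waveform. Below is a minimal sketch, assuming a mono signal as a NumPy array; the function name, frame length, and silence threshold are illustrative choices, not values from the study.

```python
import numpy as np

def confidence_cues(signal, sr, frame_ms=20, silence_db=-40):
    """Estimate simple paralinguistic cues: mean loudness and pause statistics.

    Frames the signal, computes per-frame RMS energy in dB relative to the
    loudest frame, and treats maximal runs of frames below `silence_db`
    as pauses.
    """
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    db = 20 * np.log10(rms / (rms.max() + 1e-12) + 1e-12)
    voiced = db > silence_db
    # Collect pause durations as runs of consecutive silent frames.
    pauses, run = [], 0
    for v in voiced:
        if not v:
            run += 1
        elif run:
            pauses.append(run * frame_ms / 1000)
            run = 0
    if run:
        pauses.append(run * frame_ms / 1000)
    return {
        "mean_db": float(db[voiced].mean()) if voiced.any() else float("-inf"),
        "n_pauses": len(pauses),
        "mean_pause_s": float(np.mean(pauses)) if pauses else 0.0,
    }
```

On this view, a "confident" reading would show higher `mean_db`, lower `n_pauses`, and shorter `mean_pause_s` than a "doubtful" one.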

    • "For example, both low and high vocal fundamental frequencies (F0) have been associated with dominant behavior [13], [14], [15], while a high F0 was an indicator of submissiveness. Research has also shown that loudness of the vocal signal, greater pitch, and a faster speaking rate are correlated with perceptions of dominance for someone reading both a confident and a doubtful piece of text [15]. A faster speaking rate is also indicative of competence, which Buller and Aune also suggested was linked to dominance [16]. "
    ABSTRACT: With the increase in cheap, commercially available sensors, recording meetings is becoming an increasingly practical option. With this trend comes the need to summarize the recorded data in semantically meaningful ways. Here, we investigate the task of automatically measuring dominance in small group meetings when only a single audio source is available. Past research has found that speaking length, as a single feature, provides a very good estimate of dominance. For these tasks we use speaker segmentations generated by our automated faster-than-real-time speaker diarization algorithm, where the number of speakers is not known beforehand. From user-annotated data, we analyze how the inherent variability of the annotations affects the performance of our dominance estimation method. We primarily focus on examining how the performance of the speaker diarization and dominance estimation tasks varies under different experimental conditions and computationally efficient strategies, and how this would impact a practical implementation of such a system. Despite the use of a state-of-the-art speaker diarization algorithm, speaker segments can be noisy. In experiments on almost 5 hours of audio-visual meeting data, our results show that the dominance estimation is robust to increasing diarization noise.
    IEEE Transactions on Audio, Speech, and Language Processing 06/2011; 19(4):847-860. DOI: 10.1109/TASL.2010.2066267
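The speaking-length heuristic this abstract builds on can be sketched in a few lines, assuming diarization output is available as (speaker, start, end) tuples; the function name and input format are hypothetical, not from the paper.

```python
from collections import defaultdict

def dominance_ranking(segments):
    """Rank speakers by total speaking time.

    `segments` is diarization output as (speaker, start_s, end_s) tuples;
    total speaking length per speaker serves as the dominance estimate,
    so diarization errors translate directly into ranking noise.
    """
    totals = defaultdict(float)
    for speaker, start, end in segments:
        totals[speaker] += end - start
    # Most dominant (longest-speaking) participant first.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

For example, `dominance_ranking([("A", 0, 30), ("B", 30, 40), ("A", 40, 70), ("C", 70, 75)])` ranks speaker A first with 60 seconds of speech.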
    • "Like high F0 variation, intensity is a characteristic of high-activation emotions: fear, anger, and joy (Banse and Scherer 1996). More confident individuals speak with greater intensity (Kimble and Seidel 1991), and high intensity is associated with perceptions of dominance (Aronovitch 1976; Scherer et al. 1973). "
    ABSTRACT: Low mean fundamental frequency (F0) in men's voices has been found to positively influence perceptions of dominance by men and attractiveness by women using standardized speech. Using natural speech obtained during an ecologically valid social interaction, we examined relationships between multiple vocal parameters and dominance and attractiveness judgments. Male voices from an unscripted dating game were judged by men for physical and social dominance and by women in fertile and non-fertile menstrual cycle phases for desirability in short-term and long-term relationships. Five vocal parameters were analyzed: mean F0 (an acoustic correlate of vocal fold size), F0 variation, intensity (loudness), utterance duration, and formant dispersion (Df, an acoustic correlate of vocal tract length). Parallel but separate ratings of speech transcripts served as controls for content. Multiple regression analyses were used to examine the independent contributions of each of the predictors. Physical dominance was predicted by low F0 variation and physically dominant word content. Social dominance was predicted only by socially dominant word content. Ratings of attractiveness by women were predicted by low mean F0, low Df, high intensity, and attractive word content across cycle phase and mating context. Low Df was perceived as attractive by fertile-phase women only. We hypothesize that competitors and potential mates may attend more strongly to different components of men's voices because of the different types of information these vocal parameters provide.
    Human Nature 12/2010; 21(4):406-427. DOI: 10.1007/s12110-010-9101-5
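The multiple-regression setup this abstract describes can be illustrated with ordinary least squares on synthetic data. Everything below is simulated: the predictor columns stand in for mean F0, F0 variation, intensity, duration, and Df, and only the sign pattern of the invented coefficients mirrors the reported attractiveness findings (low mean F0, high intensity, low Df).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Columns: mean F0, F0 variation, intensity, duration, Df (all standardized).
X = rng.normal(size=(n, 5))
# Simulated ratings: rise with intensity, fall with mean F0 and Df.
y = -0.8 * X[:, 0] + 0.6 * X[:, 2] - 0.5 * X[:, 4] + rng.normal(scale=0.1, size=n)
# Ordinary least squares with an intercept column; coefs[1:] are the
# independent contributions of the five vocal predictors.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With content controlled separately (as the paper does via transcript ratings), each coefficient isolates one vocal parameter's contribution to the judgments.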
    • "So far, there has been little empirical research on acoustic voice characteristics of self-confidence. Nevertheless, the following tendencies could be observed: confident speakers appeared to have a less monotonous way of presenting, fewer and shorter pauses, a fast rate of speech, a lower voice level, a higher intensity of speech, a hard voice quality, a short latency of response, and only a few corrections of their own mistakes [1] [2]. Most studies have analyzed single features or small feature sets containing only perceptual acoustic features, whereas signal processing based speech and speaker recognition features (e.g. 
    ABSTRACT: The aim of this study is to compare several classifiers commonly used within the field of speech emotion recognition (SER) on the speech-based detection of self-confidence. A standard acoustic feature set was computed, resulting in 170 features per one-minute speech sample (e.g. fundamental frequency, intensity, formants, MFCCs). In order to identify speech correlates of self-confidence, the lectures of 14 female participants were recorded, resulting in 306 one-minute segments of speech. Five expert raters independently assessed the self-confidence impression. Several classification models (e.g. Random Forest, Support Vector Machine, Naïve Bayes, Multi-Layer Perceptron) and ensemble classifiers (AdaBoost, Bagging, Stacking) were trained. AdaBoost procedures turned out to achieve the best performance, both for single models (AdaBoost LR: 75.2% class-wise averaged recognition rate) and for average boosting (59.3%), in speaker-independent settings.
    20th International Conference on Pattern Recognition, ICPR 2010, Istanbul, Turkey, 23-26 August 2010; 01/2010
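The evaluation measure quoted above, the class-wise averaged recognition rate (also called unweighted average recall), is simple to compute; a minimal sketch, with a hypothetical function name:

```python
import numpy as np

def class_wise_averaged_recall(y_true, y_pred):
    """Mean per-class recognition rate (unweighted average recall).

    Each class contributes equally regardless of its frequency, which is
    why this measure is preferred over plain accuracy when the
    self-confident / not-confident classes are imbalanced.
    """
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = [(y_pred[y_true == c] == c).mean() for c in np.unique(y_true)]
    return float(np.mean(recalls))
```

For a two-class problem, chance level under this measure is 50% even with skewed class priors, which makes results such as the 75.2% above directly interpretable.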