Article (PDF available)

Personality and Uses of Music as Predictors of Preferences for Music Consensually Classified as Happy, Sad, Complex, and Social

American Psychological Association
Psychology of Aesthetics, Creativity, and the Arts

Abstract

This study replicates the findings of a recent study (Chamorro-Premuzic, Gomà-i-Freixanet, Furnham, & Muro, 2009) on the relationship between the Big Five personality traits and everyday uses of music, that is, people's motives for listening to music. In addition, it examined emotional intelligence as a predictor of uses of music, and whether uses of music and personality traits predicted liking of music consensually classified as sad, happy, complex, or social. A total of 100 participants rated their preferences for 20 unfamiliar musical extracts, each played for a 30-s interval on a website, and completed a measure of the Big Five personality traits. Openness predicted liking for complex music, and Extraversion predicted liking for happy music. Background use of music predicted preference for social and happy music, whereas emotional use of music predicted preference for sad music. Finally, males tended to like sad music and to use music for cognitive purposes more than females did.
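To make the analytic approach concrete, here is a minimal sketch of the kind of regression reported above (liking for a music category regressed on personality traits and uses of music). The simulated data and variable names are illustrative assumptions, not the study's actual dataset or measures.

```python
# Illustrative sketch only: simulated data standing in for the study's ratings.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100  # the study had 100 participants
df = pd.DataFrame({
    "openness": rng.normal(size=n),
    "extraversion": rng.normal(size=n),
    "emotional_use": rng.normal(size=n),
    "background_use": rng.normal(size=n),
})
# Hypothetical outcome: mean liking of the "complex" excerpts.
df["liking_complex"] = 0.4 * df["openness"] + rng.normal(scale=1.0, size=n)

# Regress liking on personality and uses-of-music predictors.
model = smf.ols(
    "liking_complex ~ openness + extraversion + emotional_use + background_use",
    data=df,
).fit()
print(model.summary())
```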
... Further, musical pieces in minor mode and slow tempo, hence with a sad connotation, tend to be more appreciated among listeners with high introversion and empathy, and low emotional stability [155,156]. In addition, high neuroticism levels are associated with stronger sad feelings in response to music and greater use of music to regulate mood and emotions [155,157]. Considering that neuroticism predicts both depression and anxiety disorders [158], these results suggest that emotional distress might affect the preference for and perception of major and minor modes. ...
... Moreover, integrating personality assessments could provide fascinating insights into the relationships between musical preferences, personality traits, and acoustic features, broadening the scope of personalized music recommendations and psychological studies in music perception [59][60][61][62][63][64][65]. Note that several participants showed an interest in getting information about how the output of SoundSignature relates to their personality traits and/or mood state (see qualitative results section). ...
Preprint
Full-text available
SoundSignature is a music application that integrates a custom OpenAI Assistant to analyze users' favorite songs. The system incorporates state-of-the-art Music Information Retrieval (MIR) Python packages to combine extracted acoustic/musical features with the assistant's extensive knowledge of the artists and bands. Capitalizing on this combined knowledge, SoundSignature leverages semantic audio and principles from the emerging Internet of Sounds (IoS) ecosystem, integrating MIR with AI to provide users with personalized insights into the acoustic properties of their music, akin to a musical preference personality report. Users can then interact with the chatbot to explore deeper inquiries about the acoustic analyses performed and how they relate to their musical taste. This interactivity transforms the application, acting not only as an informative resource about familiar and/or favorite songs, but also as an educational platform that enables users to deepen their understanding of musical features, music theory, acoustic properties commonly used in signal processing, and the artists behind the music. Beyond general usability, the application also incorporates several well-established open-source musician-specific tools, such as a chord recognition algorithm (CREMA), a source separation algorithm (DEMUCS), and an audio-to-MIDI converter (basic-pitch). These features allow users without coding skills to access advanced, open-source music processing algorithms simply by interacting with the chatbot (e.g., can you give me the stems of this song?). In this paper, we highlight the application's innovative features and educational potential, and present findings from a pilot user study that evaluates its efficacy and usability.
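As a rough illustration of how such a pipeline might chain the named open-source tools, the sketch below extracts a few acoustic features and invokes DEMUCS and basic-pitch. The file name and invocation details are assumptions based on each package's public documentation, not SoundSignature's actual code; chord recognition with CREMA could be added in the same way.

```python
# Illustrative sketch of a SoundSignature-style pipeline; not the app's actual code.
# Assumes `pip install librosa demucs basic-pitch` and a local file "song.mp3".
import subprocess

import librosa
from basic_pitch.inference import predict  # Spotify's audio-to-MIDI converter

AUDIO = "song.mp3"  # hypothetical input file

# 1) Acoustic/musical features via a standard MIR package.
y, sr = librosa.load(AUDIO)
tempo, _ = librosa.beat.beat_track(y=y, sr=sr)
centroid = librosa.feature.spectral_centroid(y=y, sr=sr).mean()
print(f"Estimated tempo: {float(tempo):.1f} BPM, mean spectral centroid: {centroid:.0f} Hz")

# 2) Source separation with DEMUCS via its command-line interface
#    (writes stems such as vocals/drums/bass/other to ./separated/).
subprocess.run(["demucs", AUDIO], check=True)

# 3) Audio-to-MIDI transcription with basic-pitch (API per its README).
model_output, midi_data, note_events = predict(AUDIO)
midi_data.write("song_transcription.mid")
```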
... A robust body of literature has documented the relationship between personality and music preference (Rentfrow & Gosling, 2003;Vella & Mills, 2017); personality and music-evoked emotions (Chamorro-Premuzic et al., 2010;Silvia et al., 2015); music preference and music-evoked emotions (Schäfer & Sedlmeier, 2010); empathy and music-evoked emotions (Eerola et al., 2016). In recent years, there is increasing support for the role of absorption in aesthetic experience in general, which is extended to musical emotions as well. ...
Thesis
Full-text available
Music is widely used for leisure and relaxation purposes in everyday life and in healthcare settings. While existing behavioral studies have mapped out basic relationships between musical features and emotions, it remains unclear why different individuals have similar or different emotional responses to music. This points towards a need to examine the music–listener relationship, which could be done by considering underlying psychological mechanisms and the effect of an individual’s level of familiarity with a musical style on music-evoked emotions. Neuroimaging studies have also increasingly suggested the involvement of a widely distributed network of brain regions in music-evoked emotions, beyond the established indices of emotional processing (e.g., frontal alpha asymmetry). Moreover, the role of individual differences in music-evoked emotions, particularly trait absorption, is often either not systematically examined or not considered in conjunction with neuropsychological processes. Therefore, this thesis aims to integrate perspectives from psychological sciences and neurocognitive sciences to clarify and examine individual variation in music-evoked emotions.
... However, our results conflict with Qiu et al. (2019), who successfully related Emotional Stability to lyrics-based music preferences when only investigating participants' favorite songs, whose lyrics may be particularly meaningful compared to those of all played songs. While it seems reasonable that Emotional Stability may be connected to music listening (e.g., the emotionality of song lyrics), which is commonly used for emotion regulation, such relationships may vary intra-individually and be dependent on the emotional context of a music listening situation (i.e., the listener's mood; e.g., Chamorro-Premuzic et al., 2010). ...
Article
Full-text available
It is a long-held belief in psychology and beyond that individuals’ music preferences reveal information about their personality traits. While initial evidence relates self-reported preferences for broad musical styles to the Big Five dimensions, little is known about day-to-day music listening behavior and the intrinsic attributes of melodies and lyrics that reflect these individual differences. The present study (N = 330) proposes a personality computing approach to fill these gaps with new insights from ecologically valid music listening records from smartphones. We quantified participants’ music preferences via audio and lyrics characteristics of their played songs through technical audio features from Spotify and textual attributes obtained via natural language processing. Using linear elastic net and non-linear random forest models, these behavioral variables served to predict Big Five personality on domain and facet levels. Out-of-sample prediction performances revealed that – on the domain level – Openness was most strongly related to music listening (r = .25), followed by Conscientiousness (r = .13), while several facets of the Big Five also showed small to medium effects. Hinting at the incremental value of audio and lyrics characteristics, both musical components were differentially informative for models predicting Openness and its facets, whereas lyrics preferences played the more important role for predictions of Conscientiousness dimensions. In doing so, the models’ most predictive variables displayed generally trait-congruent relationships between personality and music preferences. These findings contribute to the development of a cumulative theory on music listening in personality science and may be extended in numerous ways by future work leveraging the computational framework proposed here.
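A minimal sketch of the modeling approach described, predicting a Big Five score from per-person listening features with an elastic net and evaluating out-of-sample correlation, is shown below. The simulated features stand in for the Spotify audio attributes and lyrics variables used in the study.

```python
# Illustrative sketch: predict a trait score from aggregated listening features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_people, n_features = 330, 20  # 330 matches the reported sample size
X = rng.normal(size=(n_people, n_features))   # e.g., mean audio/lyrics attributes per person
openness = 0.3 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(scale=1.0, size=n_people)

model = ElasticNetCV(cv=5, random_state=0)
predicted = cross_val_predict(model, X, openness, cv=10)  # out-of-sample predictions
r, p = pearsonr(openness, predicted)
print(f"Out-of-sample prediction performance: r = {r:.2f} (p = {p:.3f})")
```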
Article
Folk psychology posits that music artists’ first albums are considered their best, whereas later albums draw fewer accolades, and that artists’ second albums are considered worse than their first—a phenomenon called the “sophomore slump.” This work is the first large-scale multi-study attempt to test changes in album quality over time and whether a sophomore slump bias exists. Study 1 examined music critics, sampling all A, B, and C entries from The New Rolling Stone Record Guide (2,078 album reviews, 387 artists, 38 critics). Study 2 examined music fans, sampling crowdsourced Rate Your Music ratings of artists with at least one Rolling Stone top 500 album (4,030 album reviews, 254 artists). Using multilevel models, both studies showed significant linear declines in ratings of artists’ album quality over artists’ careers; however, the linear effects were qualified by significantly positive quadratic effects, suggesting slightly convex patterns where declines were steeper among earlier (vs later) albums. Controlling for these trends, a significant and substantial sophomore slump bias was observed for critics’ ratings, but not for fans’ ratings. We discuss theoretical perspectives that may contribute to the observed effects, including regression to the mean, cognitive biases and heuristics, and social psychological accounts.
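A minimal sketch of the multilevel trend analysis described (random intercepts for artists, with linear and quadratic album-order effects) is shown below; the simulated ratings are placeholders, not the Rolling Stone or Rate Your Music data.

```python
# Illustrative sketch of the multilevel trend analysis (simulated ratings, not the study data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for artist in range(100):
    n_albums = rng.integers(3, 12)
    artist_level = rng.normal()
    for album_num in range(1, n_albums + 1):
        # Linear decline plus a slight convex (positive quadratic) bend.
        rating = 7 + artist_level - 0.15 * album_num + 0.005 * album_num**2 + rng.normal(scale=0.8)
        rows.append({"artist": artist, "album_num": album_num, "rating": rating})
df = pd.DataFrame(rows)

# Random intercept for each artist; fixed linear and quadratic album-order effects.
mlm = smf.mixedlm("rating ~ album_num + I(album_num**2)", df, groups=df["artist"]).fit()
print(mlm.summary())
```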
Poster
Full-text available
Our findings suggest personality traits are important for bolstering self-selected music’s emotion regulation capabilities. This study contributes novel insights by investigating listener personality traits within the context of a music intervention versus control group experiment, shedding light on a previously unexplored area. Moving forward, it is imperative to delve deeper into how individual personality traits shape listeners' utilization of music, paving the way for tailored interventions that harness the full potential of music in promoting emotional well-being.
Article
In this study, we examined the associations between music preferences, uses of music and personality factors and facets. The sample included 449 participants (50% female, M age = 23.59 years, SD = 2.14) who indicated preferences for international and regional music styles that were classified into Reflective and Complex, Intense and Rebellious, Upbeat and Conventional, Energetic and Rhythmic, and Regional preferences, and filled in the Uses of Music Inventory and the IPIP-300 questionnaire. After controlling for age, gender and uses of music, personality significantly added to the prediction of all music preferences, except Energetic and Rhythmic. Personality factors explained an additional 9% to 21% of the music preference variance, and facets an additional 18% to 34%. Openness, as well as some openness facets, emerged as a significant predictor for different music preferences. Our results indicate that when trying to explain preferences with personality traits, the personality traits should be measured at the facet level.
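A minimal sketch of the hierarchical regression logic described (entering control variables first, then personality, and inspecting the added variance) is shown below; the simulated variables are illustrative, not the study's measures.

```python
# Illustrative sketch of the hierarchical regression logic (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 449  # matches the reported sample size
df = pd.DataFrame({
    "age": rng.normal(23.6, 2.1, n),
    "gender": rng.integers(0, 2, n),
    "emotional_use": rng.normal(size=n),
    "cognitive_use": rng.normal(size=n),
    "openness": rng.normal(size=n),
})
df["reflective_complex"] = 0.4 * df["openness"] + 0.2 * df["cognitive_use"] + rng.normal(size=n)

# Step 1: controls only; Step 2: controls plus personality.
step1 = smf.ols("reflective_complex ~ age + gender + emotional_use + cognitive_use", df).fit()
step2 = smf.ols("reflective_complex ~ age + gender + emotional_use + cognitive_use + openness", df).fit()
print(f"Delta R^2 after adding personality: {step2.rsquared - step1.rsquared:.3f}")
```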
Article
Full-text available
The dominant research strategy within the field of music perception and cognition has typically involved new data collection and primary analysis techniques. As a result, numerous information-rich yet underexplored datasets exist in publicly accessible online repositories. In this paper we contribute two secondary analysis methodologies to overcome two common challenges in working with previously collected data: lack of participant stimulus ratings and lack of physiological baseline recordings. Specifically, we focus on methodologies that unlock previously unexplored musical preference questions. Preferred music plays important roles in our personal, social, and emotional well-being, and is capable of inducing emotions that result in psychophysiological responses. Therefore, we select the Study Forrest dataset “auditory perception” extension as a case study, which provides physiological and self-report demographics data for participants (N = 20) listening to clips from different musical genres. In Method 1, we quantitatively model self-report genre preferences using the MUSIC five-factor model: a tool recognized for genre-free characterization of musical preferences. In Method 2, we calculate synthetic baselines for each participant, allowing us to compare physiological responses (pulse and respiration) across individuals. With these methods, we uncover average changes in breathing rate as high as 4.8%, which correlate with musical genres in this dataset (p < .001). High-level musical characteristics from the MUSIC model (mellowness and intensity) further reveal a linear breathing rate trend among genres (p < .001). Although no causation can be inferred given the nature of the analysis, the significant results obtained demonstrate the potential for previous datasets to be more productively harnessed for novel research.
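A minimal sketch of the synthetic-baseline idea is shown below: breathing rate is expressed as percent change from a participant-level reference computed from the data itself. The per-participant mean used as the baseline here is an assumption for illustration, not the paper's exact procedure.

```python
# Illustrative sketch: percent change in breathing rate relative to a synthetic baseline.
import numpy as np

rng = np.random.default_rng(3)
n_participants, n_clips = 20, 25
# Hypothetical breaths-per-minute estimates for each participant x clip.
breathing_rate = rng.normal(loc=15.0, scale=1.5, size=(n_participants, n_clips))

# Synthetic baseline: each participant's mean rate across all clips
# (an assumption standing in for the paper's baseline construction).
baseline = breathing_rate.mean(axis=1, keepdims=True)
percent_change = 100.0 * (breathing_rate - baseline) / baseline

# Compare genres by averaging percent change over the clips assigned to each genre.
genre_of_clip = rng.integers(0, 5, size=n_clips)  # 5 hypothetical genres
for g in range(5):
    mean_change = percent_change[:, genre_of_clip == g].mean()
    print(f"Genre {g}: mean breathing-rate change = {mean_change:+.2f}%")
```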
Article
Full-text available
This review organizes a variety of phenomena related to emotional self-report. In doing so, the authors offer an accessibility model that specifies the types of factors that contribute to emotional self-reports under different reporting conditions. One important distinction is between emotion, which is episodic, experiential, and contextual, and beliefs about emotion, which are semantic, conceptual, and decontextualized. This distinction is important in understanding the discrepancies that often occur when people are asked to report on feelings they are currently experiencing versus those that they are not currently experiencing. The accessibility model provides an organizing framework for understanding self-reports of emotion and suggests some new directions for research.
Article
Full-text available
The authors review the development of the modern paradigm for intelligence assessment and application and consider the differentiation between intelligence-as-maximal performance and intelligence-as-typical performance. They review theories of intelligence, personality, and interest as a means to establish potential overlap. Consideration of intelligence-as-typical performance provides a basis for evaluation of intelligence–personality and intelligence–interest relations. Evaluation of relations among personality constructs, vocational interests, and intellectual abilities provides evidence for communality across the domains of personality, of J. L. Holland's (1959) model of vocational interests, and of intellectual abilities. The authors provide an extensive meta-analysis of personality–intellectual ability correlations, and a review of interest–intellectual ability associations. They identify 4 trait complexes: social, clerical/conventional, science/math, and intellectual/cultural.
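The meta-analysis described pools personality–ability correlations across studies; the sketch below shows the standard sample-size-weighted Fisher-z aggregation as a generic illustration, not the authors' exact procedure.

```python
# Illustrative sketch of sample-size-weighted correlation aggregation via Fisher's z.
import numpy as np

# Hypothetical study-level correlations and sample sizes (placeholders, not the review's data).
r_values = np.array([0.33, 0.21, 0.40, 0.28])
n_values = np.array([120, 340, 95, 210])

z = np.arctanh(r_values)         # Fisher r-to-z transform
weights = n_values - 3           # inverse-variance weights for z
z_bar = np.average(z, weights=weights)
r_bar = np.tanh(z_bar)           # back-transform to a pooled correlation
print(f"Pooled correlation: r = {r_bar:.3f}")
```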
Article
A basic issue about musical emotions concerns whether music elicits emotional responses in listeners (the 'emotivist' position) or simply expresses emotions that listeners recognize in the music (the 'cognitivist' position). To address this, psychophysiological measures were recorded while listeners heard two excerpts chosen to represent each of three emotions: sad, fear, and happy. The measures covered a fairly wide spectrum of cardiac, vascular, electrodermal, and respiratory functions. Other subjects indicated dynamic changes in emotions they experienced while listening to the music on one of four scales: sad, fear, happy, and tension. Both physiological and emotion judgments were made on a second-by-second basis. The physiological measures all showed a significant effect of music compared to the pre-music interval. A number of analyses, including correlations between physiology and emotion judgments, found significant differences among the excerpts. The sad excerpts produced the largest changes in heart rate, blood pressure, skin conductance and temperature. The fear excerpts produced the largest changes in blood transit time and amplitude. The happy excerpts produced the largest changes in the measures of respiration. These emotion-specific physiological changes only partially replicated those found for non-musical emotions. The physiological effects of music observed generally support the emotivist view of musical emotions.
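A minimal sketch of the second-by-second analysis described, correlating a continuous emotion rating with a physiological series, is shown below; both series are simulated placeholders.

```python
# Illustrative sketch: correlate a physiological series with continuous emotion ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
seconds = 120  # a two-minute excerpt, sampled once per second
sadness_rating = np.clip(np.cumsum(rng.normal(0, 0.1, seconds)), -3, 3)
heart_rate = 70 - 1.5 * sadness_rating + rng.normal(0, 1.0, seconds)  # hypothetical coupling

r, p = pearsonr(sadness_rating, heart_rate)
print(f"Second-by-second correlation between sadness ratings and heart rate: r = {r:.2f}")
```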
Article
This book introduces models with multiple latent variables by utilizing path diagrams to explain the underlying relationships in the models. This approach helps less mathematically inclined students grasp the underlying relationships between path analysis, factor analysis, and structural equation modeling more easily. A few sections of the book make use of elementary matrix algebra; an appendix on the topic is provided for those who need a review. The author maintains an informal style so as to increase the book's accessibility. Notes at the end of each chapter provide some of the more technical details. The book is not tied to a particular computer program, but special attention is paid to LISREL, EQS, AMOS, and Mx. New in the fourth edition of Latent Variable Models are a data CD that features the correlation and covariance matrices used in the exercises; new sections on missing data, non-normality, mediation, factorial invariance, and automating the construction of path diagrams; and a reorganization of chapters 3-7 to enhance the flow of the book and its flexibility for teaching. Intended for advanced students and researchers in the areas of social, educational, clinical, industrial, consumer, personality, and developmental psychology, sociology, political science, and marketing; some prior familiarity with correlation and regression is helpful.
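As a small, simplified stand-in for the latent-variable techniques the book covers, the sketch below fits a one-factor model to simulated indicators with scikit-learn rather than the SEM programs named above.

```python
# Illustrative sketch: a one-factor latent-variable model on simulated indicators
# (a simple stand-in for full SEM; not tied to LISREL, EQS, AMOS, or Mx).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
n = 500
latent = rng.normal(size=n)                      # the unobserved factor
loadings = np.array([0.8, 0.7, 0.6, 0.5])        # hypothetical true loadings
indicators = latent[:, None] * loadings + rng.normal(scale=0.5, size=(n, 4))

fa = FactorAnalysis(n_components=1).fit(indicators)
print("Estimated loadings:", fa.components_.round(2))
```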
Article
The purpose of this study was to examine the personality characteristics and developmental issues of 3 groups of adolescent music listeners: those preferring light qualities of music, those preferring heavy qualities of music, and those who had eclectic preferences for music qualities. One hundred sixty-four adolescents completed an age-appropriate personality inventory and a systematic measure of music listening preference. The findings indicate that each of the 3 music preference groups is inclined to demonstrate a unique profile of personality dimensions and developmental issues. Those preferring heavy or light music qualities indicated at least moderate difficulty in negotiating several distinct domains of personality and/or developmental issues; those with more eclectic music preferences did not indicate similar difficulty. Thus, there was considerable support for the general hypothesis that adolescents prefer listening to music that reflects specific personalities and the developmental issues with which they are dealing.