Source publication
Forty-four Western-enculturated musicians completed two studies. The first group was asked to judge the relative sadness of forty-four familiar Western instruments. An independent group was asked to assess a number of acoustical properties for those same instruments. Using the estimated acoustical properties as predictor variables in a multiple regression...
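To make the analysis concrete, here is a minimal sketch of the kind of multiple regression the abstract describes: predicting mean sadness judgments per instrument from rated acoustic properties. The predictor names, coefficients, and data below are hypothetical placeholders, not the study's materials.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 44  # one observation per instrument

# Hypothetical predictors, one row per instrument (e.g., quietness,
# darkness of timbre, slowness of attack, pitch height) -- placeholder
# names and values, not the study's measured properties.
X = rng.normal(size=(n, 4))
true_betas = np.array([0.6, 0.5, 0.4, -0.3])
sadness = X @ true_betas + rng.normal(scale=0.5, size=n)  # synthetic ratings

# Ordinary least squares with an intercept column.
A = np.column_stack([np.ones(n), X])
coefs, *_ = np.linalg.lstsq(A, sadness, rcond=None)

pred = A @ coefs
r2 = 1 - np.sum((sadness - pred) ** 2) / np.sum((sadness - sadness.mean()) ** 2)
print("betas:", np.round(coefs[1:], 2), "R^2:", round(r2, 3))
```

The fitted coefficients indicate which acoustic properties carry the most weight in predicting the sadness judgments, which is the logic the abstract's design implies.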
Similar publications
Musical engagement may be associated with better listening skills, such as the perception of and working memory for notes, in addition to the appreciation of musical rules. The nature and extent of this association is controversial. In this study we assessed the relationship between musical engagement and both sound perception and working memory.
Citations
... Moreover, the concept of the neutralizing effect of musical emotions stipulates that music induces feelings such as affinity, pleasure, or convergence, which are intrinsic to the rewarding experience music bestows upon its listeners (Vuoskoski et al., 2011). Certain listeners experience sorrow induced by empathetic reactions to sad sound features, learned associations, and cognitive rumination (Huron, 2011; Huron et al., 2014). Melancholic music evokes a profound esthetic emotion that pleases those with particular affective inclinations. ...
Music, an influential environmental factor, significantly shapes cognitive processing and everyday experiences, thus rendering its effects on creativity a dynamic topic within the field of cognitive science. However, debates continue about whether music bolsters, obstructs, or exerts a dual influence on individual creativity. Among the points of contention is the impact of contrasting musical emotions (both positive and negative) on creative tasks. In this study, we focused on traditional Chinese music, drawn from a culture known for its ‘preference for sadness,’ as our selected emotional stimulus and background music. This choice, underrepresented in previous research, was based on its uniqueness. We examined the effects of differing music genres (including vocal and instrumental), each characterized by a distinct emotional valence (positive or negative), on performance in the Alternative Uses Task (AUT). To conduct this study, we utilized an affective arousal paradigm, with a quiet background serving as a neutral control setting. A total of 114 participants were randomly assigned to three distinct groups after completing a music preference questionnaire: instrumental, vocal, and silent. Our findings showed that when compared to a quiet environment, both instrumental and vocal music as background stimuli significantly affected AUT performance. Notably, music with a negative emotional charge bolstered individual originality in creative performance. These results lend support to the dual role of background music in creativity, with instrumental music appearing to enhance creativity through factors such as emotional arousal, cognitive interference, music preference, and psychological restoration. This study challenges the conventional understanding that only positive background music boosts creativity and provides empirical validation for the two-path model (positive and negative) of emotional influence on creativity.
... Previous findings have suggested that differences in sound attributes such as brightness and spectral entropy may impact the emotional quality of the music [11,32]. Different instruments have been investigated with respect to their emotional qualities, though mostly as individual instruments rather than as ensembles [50][51][52]. We chose to test groups of instruments rather than individual instruments, as this allowed us to test polyphonic music and a wider register range simultaneously. ...
... In Experiment 1, a woodwinds ensemble was specifically used to convey sadness [11,32,54], while a strings ensemble was specifically not chosen. The opposite pattern appears in Experiment 2, where strings instrumentation was used for sadness [20,51] and woodwinds instrumentation was specifically not used. Brass instrumentation was chosen for joy in Experiment 1 [21,32], while a woodwinds ensemble was preferred in Experiment 2. Strings instrumentation was specifically not used to convey joy in either experiment. ...
... The instrument-emotion association results produced in this paper partially align with a handful of studies that examined individual instruments rather than instrument ensembles [11,32,51,54]. However, there are also conflicting results, both between the findings of the two current experiments and with other previous findings. ...
Multiple approaches have been used to investigate how musical cues are used to shape different emotions in music. The most prominent approach is a perception study, where musical stimuli varying in cue levels are assessed by participants in terms of their conveyed emotion. However, this approach limits the number of cues and combinations simultaneously investigated, since each variation produces another musical piece to be evaluated. Another less used approach is a production approach, where participants use cues to change the emotion conveyed in music, allowing participants to explore a larger number of cue combinations than the former approach. These approaches provide different levels of accuracy and economy for identifying how cues are used to convey different emotions in music. However, do these approaches provide converging results? This paper's aims are two-fold. The role of seven musical cues (tempo, pitch, dynamics, brightness, articulation, mode, and instrumentation) in communicating seven emotions (sadness, joy, calmness, anger, fear, power, and surprise) in music is investigated. Additionally, this paper explores whether the two approaches will yield similar findings on how the cues are used to shape different emotions in music. The first experiment utilises a production approach where participants adjust the cues in real-time to convey target emotions. The second experiment uses a perception approach where participants rate pre-rendered systematic variations of the stimuli for all emotions. Overall, the cues operated similarly in the majority (32/49) of cue-emotion combinations across both experiments, with the most variance produced by the dynamics and instrumentation cues. A comparison of the prediction accuracy rates of cue combinations representing the intended emotions found that prediction rates in Experiment 1 were higher than the ones obtained in Experiment 2, suggesting that a production approach may be a more efficient method to explore how cues are used to shape different emotions in music.
... It is tempting to apply this to the realm of music, which can be seen as having the power to affect the quality of life through emotion regulation and observable effects on both behavior and brain functioning [175]. Music, in this view, can be used for self-reflection, an ability that requires internally directed cognition, as seen most typically in the case of listening to sad music, which is considered by some a major source of enjoyment [9,10,51,103,104,140,176-180]. Listening to sad music, moreover, has also been related to the phenomenon of depressive realism, which holds that people are more realistic when they are sad. ...
This article is a hypothesis and theory paper. It elaborates on the possible relation between music as a stimulus and its possible effects, with a focus on the question of why listeners experience pleasure and reward. Though it is tempting to seek a causal relationship, this has proven elusive given the many intermediary variables that intervene between the actual impingement on the senses and the reactions/responses of the listener. A distinction can be made, however, between three elements: (i) an objective description of the acoustic features of the music and their possible role as elicitors; (ii) a description of the possible modulating factors—both external/exogenous and internal/endogenous ones; and (iii) a continuous and real-time description of the responses by the listener, both in terms of their psychological reactions and their physiological correlates. Music listening, in this broadened view, can be considered a multivariate phenomenon of biological, psychological, and cultural factors that, together, shape the overall, full-fledged experience. In addition to an overview of current research on musical enjoyment and reward, we draw attention to some key methodological problems that still complicate a full description of the musical experience. We further elaborate on how listening may entail both adaptive and maladaptive ways of coping with the sounds, with the former allowing a gentle transition from mere hedonic pleasure to eudaimonic enjoyment.
... Additionally, it offers the possibility of formally assessing how developments in instrument technology influence instruments' emotional affordances. Studies exploring the relationship between instruments' design and their emotional palette highlight the importance of these inquiries (de Souza, 2017; Huron et al., 2014; Schutz et al., 2008). Several aspects of our data complement more traditional experimental approaches, such as the strong contribution of attack rate to arousal ratings and mode's influence on valence for both composers. ...
A growing body of research analyzing musical scores suggests mode’s relationship with other expressive cues has changed over time. However, to the best of our knowledge, the perceptual implications of these changes have not been formally assessed. Here, we explore how the compositional choices of 18th- and 19th-century composers (J. S. Bach and F. Chopin, respectively) differentially affect emotional communication. This novel exploration builds on our team’s previous techniques using commonality analysis to decompose intercorrelated cues in unaltered excerpts of influential compositions. In doing so, we offer an important naturalistic complement to traditional experimental work—often involving tightly controlled stimuli constructed to avoid the intercorrelations inherent to naturalistic music. Our data indicate intriguing changes in cues’ effects between Bach and Chopin, consistent with score-based research suggesting mode’s “meaning” changed across historical eras. For example, mode’s unique effect accounts for the most variance in valence ratings of Chopin’s preludes, whereas its shared use with attack rate plays a more prominent role in Bach’s. We discuss the implications of these findings as part of our field’s ongoing effort to understand the complexity of musical communication—addressing issues only visible when moving beyond stimuli created for scientific, rather than artistic, goals.
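For readers unfamiliar with commonality analysis, the two-predictor case the abstract describes reduces to a simple R² decomposition. The sketch below illustrates it on synthetic data; the variable names (mode, attack rate, valence) follow the abstract, but all values are invented for illustration.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(1)
n = 48  # e.g., one valence rating per excerpt (synthetic)
mode = rng.normal(size=n)                  # hypothetical mode coding
attack = 0.6 * mode + rng.normal(size=n)   # correlated cue, by construction
valence = 0.7 * mode + 0.3 * attack + rng.normal(size=n)

r2_full = r_squared(np.column_stack([mode, attack]), valence)
r2_mode = r_squared(mode[:, None], valence)
r2_attack = r_squared(attack[:, None], valence)

# Two-predictor commonality decomposition:
# unique variance of each cue, plus the variance they share.
unique_mode = r2_full - r2_attack
unique_attack = r2_full - r2_mode
common = r2_full - unique_mode - unique_attack
print(f"unique(mode)={unique_mode:.3f}  unique(attack)={unique_attack:.3f}  common={common:.3f}")
```

A finding like "mode's unique effect dominates for Chopin, while its shared use with attack rate matters more for Bach" corresponds to comparing the `unique_mode` and `common` components across the two corpora.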
... For example, Hailstone et al. (2009) found that affect recognition of melodies was influenced by pairing the melodies with particular instruments. Huron et al. (2014) observed that participants' judgments of an instrument's capacity for sadness were correlated with acoustic properties known to contribute to sad prosody in speech. Schutz et al. (2008) investigated a corpus of music for xylophone, an instrument that is acoustically limited to producing short durations and a brighter timbre, and that may therefore be limited in its semantic affordances due to its inability to mimic sad speech prosody. ...
This paper offers a series of characterizations of prototypical musical timbres, called Timbre Trait Profiles, for 34 musical instruments common in Western orchestras and wind ensembles. These profiles represent the results of a study in which 243 musician participants imagined the sounds of various instruments and used the 20-dimensional model of musical instrument timbre qualia proposed by Reymore and Huron (2020) to rate their auditory image of each instrument. The rating means are visualized through radar plots, which provide timbral-linguistic thumbprints, and are summarized through snapshot profiles, which catalog the six highest- and three lowest-rated descriptors. The Euclidean distances among instruments offer a quantitative operationalization of semantic distances; these distances are illustrated through hierarchical clustering and multidimensional scaling. Exploratory Factor Analysis is used to analyze the latent structure of the rating data. Finally, results are used to assess Reymore and Huron’s 20-dimensional timbre qualia model, suggesting that the model is highly reliable. It is anticipated that the Timbre Trait Profiles can be applied in future perceptual/cognitive research on timbre and orchestration, in music theoretical analysis for both close readings and corpus studies, and in orchestration pedagogy.
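The distance and clustering steps described above are straightforward to reproduce in outline. Below is a minimal sketch, assuming a ratings matrix of instruments × 20 qualia dimensions; the instrument list and random ratings are placeholders, not the published means.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
instruments = ["flute", "oboe", "trumpet", "violin", "cello", "tuba"]
ratings = rng.uniform(0, 6, size=(len(instruments), 20))  # 20 qualia dims

# Euclidean distances in the 20-dimensional semantic space.
dist_matrix = squareform(pdist(ratings, metric="euclidean"))

# Hierarchical clustering over the same condensed distances.
tree = linkage(pdist(ratings), method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
for name, c in zip(instruments, clusters):
    print(name, "-> cluster", c)
```

On the real rating means, the pairwise distances operationalize semantic similarity between timbres, and the dendrogram from `linkage` yields the hierarchical groupings the paper visualizes.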
... Participants listened to a sample of songs from the Billboard corpus categorized as having either a descending or non-descending bass line and rated the presence of musical features related to sadness. The study borrows this paradigm from previous research by Juslin and Laukka (2003) and Huron, Anderson, and Shanahan (2014). Shea hypothesizes that a descending bass line would more strongly predict ratings for loudness, tempo, interval size, articulation, and timbre than a non-descending bass line would. ...
This commentary focuses on Shea (2019) and its relationship to much of the literature on popular and rock music. The commentary offers some methodological considerations on the construction of corpora for this type of analysis. It questions the operational definition of 'lament' and lauds Shea's work in attempting to create a more thorough, objective definition. Ultimately, the commentary concludes that, while Shea's approach is worthwhile, a much larger corpus must be generated in order to draw meaningful conclusions from it.
... General acoustic properties of a recording may also provide more insight into affect than harmonic setting. In response, a second study was conducted that examines the relationship between songs with descending bass lines and five additional acoustical-musical cues that have been demonstrated to be associated with sadness (Huron, Anderson, & Shanahan, 2014). ...
... Sad-sounding musical passages have been reported to exhibit the following features: quieter dynamic levels, slower tempi, smaller pitch movements, relatively low overall pitch, "mumble-like" articulations, and darker timbres (Juslin & Laukka, 2003; Huron, Anderson, & Shanahan, 2014). Experiment 1 did not and could not address these pertinent features, for two reasons. ...
A descending bass line coordinated with sad lyrics is often described as evoking the "lament" topic—a signal to listeners that grief is being conveyed (Caplin, 2014). In human speech, a similar pattern of pitch declination occurs as air pressure is lost ('t Hart, Collier, & Cohen, 1990) which—coordinated with the premise that sad speech is lower in pitch (Lieberman & Michaels, 1962)—suggests there may be a cognitive-ecological association between descending bass lines and negative emotion more broadly. This study reexamines the relationship between descending bass lines and sadness in songs with lyrics. First, two contrasting repertoires were surveyed: 703 cantata movements by J. S. Bach and 740 popular music songs released ca. 1950–1990. Works featuring descending bass lines were identified and bass lines extracted by computationally parsing scores for bass or the lowest sounding musical line that descends incrementally by step. The corresponding lyrics were then analyzed using the Linguistic Inquiry and Word Count (Pennebaker et al., 2015a, 2015b). Results were not consistent with the hypothesis that descending bass lines are associated with a general negative affect and thus also not specifically with sadness. In a follow-up behavioral study, popular music excerpts featuring a descending bass were evaluated for the features of sad sounds (Huron, Anderson, & Shanahan, 2014) by undergraduate musicians. Here, tempo and articulation, but not interval size as anticipated, were found to be the best predictors of songs with descending bass lines.
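As one illustration of the score-parsing step, the sketch below flags bass lines that descend incrementally by step, given a sequence of MIDI pitches. The run-length threshold and the 1-2 semitone definition of a "step" are assumptions for illustration, not the authors' published criteria.

```python
def is_descending_stepwise(bass_pitches, min_length=4):
    """Return True if the line contains a run of >= min_length notes,
    each falling by a step (1 or 2 semitones) from the previous note.
    min_length=4 is an assumed threshold, not the study's criterion."""
    run = 1
    for prev, cur in zip(bass_pitches, bass_pitches[1:]):
        if -2 <= cur - prev <= -1:  # descending semitone or whole tone
            run += 1
            if run >= min_length:
                return True
        else:
            run = 1  # motion was not a descending step; restart the run
    return False

# Example: a lament-style tetrachord descending from A to E (MIDI).
print(is_descending_stepwise([69, 67, 65, 64]))  # True
print(is_descending_stepwise([60, 64, 67, 72]))  # False (ascending)
```

Applied to the lowest sounding line of each parsed score, a predicate like this separates a corpus into descending and non-descending subsets for the lyric analysis described above.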
... Apparently, yes. In the case of sad music, much evidence supports the concept of a voice-music homolog, where music emulates the acoustical features of melancholic or grief-related vocalizations (Juslin and Laukka, 2003; Schutz et al., 2008; Paul and Huron, 2010; Huron et al., 2014). Overall, the research suggests that music and vocal prosody share common affective resources for both production and perception. ...
Drawing on recent empirical studies on the enjoyment of nominally sad music, a general theory of the pleasure of tragic or sad portrayals is presented. Not all listeners enjoy sad music. Multiple studies indicate that those individuals who enjoy sad music exhibit a particular pattern of empathic traits. These individuals score high on empathic concern (compassion) and high on imaginative absorption (fantasy), with only nominal personal distress (commiseration). Empirical studies are reviewed implicating compassion as a positively valenced affect. Accordingly, individuals who most enjoy sad musical portrayals experience a pleasurable prosocial affect (compassion), amplified by empathetic engagement (fantasy), while experiencing only nominal levels of unpleasant emotional contagion (commiseration). It is suggested that this pattern of trait empathy may apply more broadly, accounting for many other situations where spectators experience pleasure when exposed to tragic representations or portrayals.
... An experimental study by Huron, Anderson, and Shanahan (2014) showed that certain instruments are associated with specific emotions and that the instrumentation of a song can affect the listeners. This result seems promising for the present investigation because popular music is often released in different versions, with varying musical production elements (e.g., studio, live, acoustic, or remix). ...
... Emotions, especially empathetic emotions, seem to be the key factor influenced by music with prosocial lyrics (Greitemeyer, 2009b). The previously described empirical studies have indicated that musical production elements such as instrumentation (Huron et al., 2014) can intensify emotions. Therefore, it seems reasonable to argue that a song that features more fitting musical production elements supports the effect of prosocial lyrics on the listener's thoughts, emotions and behavior, as described in the theoretical model. ...
Popular music with prosocial lyrics affects listeners’ thoughts, emotions and behavior, yet little is known about the role played by the actual music in this process. This study focused on the interaction between the prosocial lyrics and the musical production elements, examining whether certain versions of a song can enhance the effect of prosocial lyrics on thoughts, emotions and behavior. Based on the general learning model and the reciprocal-feedback model of music perception, a laboratory experiment (N = 136) was conducted to test how listeners are affected by music with prosocial or neutral lyrics and by an electronic or an unplugged version of the music. For this purpose, an original song was composed and produced, using the same melodies and harmonies with varied lyrics and instrumentation. In a pilot study (n = 36), a version with acoustic instrumentation was rated as the most emotional and fitting, whereas an electronic dance version was rated as the least emotional and fitting. There was a significant interaction effect between the lyrics and the musical production elements: Those listening to the unplugged version with prosocial lyrics showed the most empathetic emotions. Prosocial lyrics also had an effect on prosocial thoughts but not on behavior.
... Melancholic speech, for example, tends to be spoken in a quieter-than-normal voice, at a slower speaking rate, at lower-than-normal overall pitch, in a monotone voice, in a mumbling fashion, and with a dark timbre (Kraepelin, 1921). Melancholic music tends to mirror these prosodic characteristics: it is quieter, slower, lower in pitch, has smaller pitch movements, is legato, and uses darker timbres (Huron, 2008; Huron, Anderson, & Shanahan, 2014; Schutz, Huron, Keeton, & Loewer, 2008; Turner & Huron, 2008; Post & Huron, 2009; Yim, Huron, & Chordia, in preparation). Grief sounds, on the other hand, include features such as vocalized punctuated exhaling, energetic sustained tones (wails), ingressive vocalization, use of falsetto phonation, breaking voice, pharyngealization, creaky voice, and sniffling (Urban, 1988). ...
Music, perhaps more than any other art form, is able to influence moods and affect behavior. There are limitless accounts of music eliciting feelings of nostalgia, transcendence, and other seemingly ineffable emotions. In the scientific study of music and emotion, however, only five music-induced emotions have been studied in depth: happiness, sadness, fear, anger, and tenderness (Juslin, 2013). Although these emotions are certainly important and can be expressed and elicited through music listening, a pertinent question becomes the following: do these five words accurately capture all affective states related to music? Throughout my dissertation, I argue that in order to better understand emotional responses to musical stimuli, we need to change the way we use emotional terminology and examine emotional behaviors.
In the first part of the dissertation (Chapters 1-4), I review how emotional music has been theoretically characterized and which excerpts have been utilized in research. I will show that the field of music and emotion is fraught with conceptual difficulties and that passages of music expressing a single emotion (e.g., sadness) span an unmanageably large area of emotional space. The second part of the dissertation (Chapters 5-8) provides an in-depth analysis of music that has been classified by other researchers as sad. I will show that previous research has conflated at least two separable emotional states under the umbrella term sadness: melancholy and grief. Through a series of behavioral experiments, I argue that melancholic and grief-like music utilize different kinds of music-theoretic structures, are perceived as separate emotional states, and result in different feeling states. In the last part of the dissertation (Chapters 9-11), I offer two possible interpretations of the research findings, drawing first from the field of ethology to show that melancholy and grief could be separable emotion states that have different biological functions and vocal characterizations (e.g., Huron, 2015). Then, I advocate for the adoption of a psychological phenomenon called emotional granularity (e.g., Barrett, 2004). Emotional granularity refers to the specificity with which a person labels their emotional states, and is both an individual characteristic and a learnable skill. The dissertation concludes with ideas for future research, including the investigation of how the musical structure may result in subtle shades of emotion previously unrecognized in the music psychology literature.