Ann. N.Y. Acad. Sci. 1060: 450–453 (2005). © 2005 New York Academy of Sciences. doi: 10.1196/annals.1360.047
Emotion Processing of Major, Minor, and Dissonant Chords
A Functional Magnetic Resonance Imaging Study
KAREN JOHANNE PALLESEN,a,b ELVIRA BRATTICO,c CHRISTOPHER BAILEY,a ANTTI KORVENOJA,d JUHA KOIVISTO,b ALBERT GJEDDE,a AND SYNNÖVE CARLSONb
aCenter of Functionally Integrative Neuroscience and PET Centre, Aarhus University Hospital, 8000 Aarhus C, Denmark
bNeuroscience Unit, Helsinki Brain Research Center, Institute of Biomedicine/Physiology, University of Helsinki, 00014 Helsinki, Finland
cCognitive Brain Research Unit, Helsinki Brain Research Center, University of Helsinki, 00014 Helsinki, Finland
dFunctional Brain Imaging Unit, Helsinki Brain Research Center, 00029 Helsinki University Central Hospital, Helsinki, Finland

Address for correspondence: Karen Johanne Pallesen, Center for Functionally Integrative Neuroscience, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C, Denmark. Voice: +45-89494095; fax: +45-89494400. E-mail: karenjohanne@pet.auh.dk
ABSTRACT: Musicians and nonmusicians listened to major, minor, and dissonant musical chords while their BOLD brain responses were registered with functional magnetic resonance imaging. In both groups of listeners, minor and dissonant chords, compared with major chords, elicited enhanced responses in several brain areas, including the amygdala, retrosplenial cortex, brain stem, and cerebellum, during passive listening but not during memorization of the chords. The results indicate that (1) neural processing in emotion-related brain areas is activated even by single chords, (2) emotion processing is enhanced in the absence of cognitive requirements, and (3) musicians and nonmusicians do not differ in their neural responses to single musical chords during passive listening.

KEYWORDS: emotions; music; musical competence; working memory; emotion–cognition interaction
INTRODUCTION
Major chords, minor chords, and dissonance in music are commonly said to evoke happy, sad, and unpleasant experiences, respectively. Behavioral studies have shown that these emotional effects may be elicited by brief melodic fragments and even by isolated chords in musicians as well as nonmusicians.1 Dissonance presented in melodies has previously been related to activity in the right parahippocampal gyrus and right precuneus.2 Moreover, the downregulating effects of appraisal and cognition on emotion processing have been demonstrated in terms of amygdalar responses to aversive visual stimuli.3 In spite of the well-proven power of music to elicit positive or negative emotional experiences, the mechanisms of emotional regulation have so far not been studied in this domain. Because musical competence, as reflected in neurophysiological auditory responses, implies a more analytical approach to musical sounds, it may also be reflected in neural emotion processing. We studied whether (1) simple musical chords activate brain areas previously associated with emotion analysis, (2) cognitive evaluation influences these responses, and (3) musical competence influences the emotional responses.
METHODS
Twenty-one right-handed individuals (mean age 26 years; 14 females) participated: 11 subjects with a classical music education (musicians) and 10 subjects with no musical training (nonmusicians). They were presented with nine piano chords belonging to three categories (major, minor, dissonant), spanning three octaves from A3 to A5. The subjects either listened passively to the chords or performed an n-back working memory task with respect to pitch. After the brain scanning, the subjects were asked to rate the emotional connotations of each chord on two 11-point scales (unpleasant–pleasant and sad–happy, respectively). In all conditions, subjects pressed a button after each chord to keep motor-related brain activity constant across conditions. Magnetic resonance images were acquired with a 1.5-T Siemens Sonata scanner. Analysis was performed using the FMRIB Software Library (FSL, Oxford, UK). Each subject's structural and functional data were coregistered to the MNI152 standard template.
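For illustration only, the sketch below synthesizes one major, one minor, and one dissonant chord in each of the three octaves (A3–A5) as sums of decaying sine tones. It is a minimal sketch under stated assumptions: the study used recorded piano chords, and the interval pattern chosen for the dissonant chord, the sampling rate, and the decay envelope are not taken from the paper.

```python
# Minimal sketch (not the authors' stimulus code): synthesizing one major,
# one minor, and one dissonant chord per octave (A3, A4, A5) as sums of
# decaying sine tones. The real study used piano chords; the interval choice
# for the "dissonant" chord and all synthesis parameters are assumptions.
import numpy as np

FS = 44100          # sampling rate (Hz), assumed
DUR = 0.5           # chord duration (s), assumed

def midi_to_hz(m):
    """Convert a MIDI note number to frequency in Hz (A4 = 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((m - 69) / 12.0)

def chord(root_midi, intervals):
    """Sum decaying sine tones at the given semitone offsets above the root."""
    t = np.arange(int(FS * DUR)) / FS
    env = np.exp(-3.0 * t)                       # simple exponential decay
    tones = [np.sin(2 * np.pi * midi_to_hz(root_midi + i) * t) for i in intervals]
    sig = env * np.sum(tones, axis=0)
    return sig / np.max(np.abs(sig))             # normalize to [-1, 1]

# Interval patterns (semitones above the root); "dissonant" is illustrative.
CATEGORIES = {"major": (0, 4, 7), "minor": (0, 3, 7), "dissonant": (0, 1, 6)}
ROOTS = {"A3": 57, "A4": 69, "A5": 81}           # MIDI numbers for A3-A5

stimuli = {(octave, name): chord(root, pattern)
           for octave, root in ROOTS.items()
           for name, pattern in CATEGORIES.items()}
print(f"Generated {len(stimuli)} chords")        # 3 categories x 3 octaves = 9
```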
FIGURE 1. BOLD responses that were larger during passive listening to minor chords than during passive listening to major chords, including the amygdala, retrosplenial cortex, brain stem, and cerebellum. The responses in this contrast were not present during working memory. Group results are overlaid on transverse sections and the inflated cortex of a single individual (corrected for multiple comparisons: Z > 3, P < .05).
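The contrast reported in FIGURE 1 can be approximated conceptually with a voxelwise general linear model. The following is a minimal sketch on synthetic data, not the FSL pipeline used in the study: it fits an ordinary least-squares GLM per voxel, forms a minor-minus-major contrast, converts the t statistic to Z, and applies the Z > 3 cutoff without any multiple-comparison correction. The design coding, noise model, and effect size are assumptions.

```python
# Minimal sketch on synthetic data (not the FSL analysis used in the study):
# voxelwise OLS GLM with a "minor > major" contrast, thresholded at Z > 3.
# Design coding, noise model, effect size, and the absence of spatial
# multiple-comparison correction are all simplifying assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_scans, n_voxels = 120, 5000

# Each scan is labeled with the chord category presented (0=major, 1=minor,
# 2=dissonant); the design matrix uses one indicator column per category.
labels = rng.integers(0, 3, size=n_scans)
X = np.column_stack([(labels == k).astype(float) for k in range(3)])

# Synthetic "BOLD" data: noise plus an extra response to minor chords in the
# first 50 voxels (standing in for an emotion-related region).
Y = rng.normal(size=(n_scans, n_voxels))
Y[:, :50] += 0.8 * X[:, 1:2]

# Ordinary least-squares fit, voxel by voxel.
beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
resid = Y - X @ beta
dof = n_scans - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof

# Contrast: minor minus major.
c = np.array([-1.0, 1.0, 0.0])
var_c = c @ np.linalg.inv(X.T @ X) @ c
t = (c @ beta) / np.sqrt(sigma2 * var_c)

# Convert t to Z and count voxels exceeding the Z > 3 cutoff.
z = stats.norm.isf(stats.t.sf(t, dof))
print("suprathreshold voxels (Z > 3):", int(np.sum(z > 3)))
```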
RESULTS
During passive listening, minor and dissonant chords elicited larger BOLD responses than did major chords in several brain areas, including the amygdala, retrosplenial cortex, brain stem, and cerebellum (FIG. 1 shows the minor versus major chord contrast). Together with the thalamus and brain stem, the amygdala has been implicated in the evolution of an adaptive "alarm system."4 These differential responses to minor and dissonant chords, compared with major chords, were not present during the pitch working memory task, which required cognitive evaluation of the chords. Although musicians rated minor chords as sadder and dissonant chords as more unpleasant than did nonmusicians (FIG. 2), there were no significant differences between the two groups of subjects in the neural responses to the chords during passive listening.
CONCLUSION
The results provide evidence for a role of emotional reactions to isolated musical sound units in the musical experience. Moreover, they confirm that cognitive evaluation leads to decreased emotional responsiveness. We suggest that the amygdala–brain stem responses to minor and dissonant chords, compared with major chords, during passive listening reflect a mechanism that automatically interprets these chords as potentially alarming stimuli. The group difference in the emotional ratings of the chords, but not in the neural responses, may reflect the musicians' ability to recognize and categorize the chords in terms of their conventional emotional connotations, rather than a genuinely enhanced emotional experience.

[Competing interests: The authors declare that they have no competing financial interests.]
FIGURE 2. Emotional ratings (mean ± SE) of major, minor, and dissonant chords. Musicians rated dissonant chords as significantly more unpleasant, and minor chords as significantly sadder, than nonmusicians did (P < .05).
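A minimal sketch of the kind of summary shown in FIGURE 2: mean ± SE ratings per chord category and a musicians-versus-nonmusicians comparison. The rating values below are randomly generated placeholders rather than the study's data, and the choice of Welch's t-test at the .05 level is an assumption.

```python
# Minimal sketch of the FIGURE 2 summary: mean +/- SE ratings per chord
# category and a musicians-vs-nonmusicians comparison. All rating values are
# randomly generated placeholders, not study data; Welch's t-test is an
# assumed choice of test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
categories = ["major", "minor", "dissonant"]

# Placeholder 11-point ratings (0-10) for 11 musicians and 10 nonmusicians.
ratings = {
    "musicians":    {c: rng.integers(0, 11, size=11) for c in categories},
    "nonmusicians": {c: rng.integers(0, 11, size=10) for c in categories},
}

for cat in categories:
    for group, data in ratings.items():
        x = data[cat]
        se = x.std(ddof=1) / np.sqrt(len(x))      # standard error of the mean
        print(f"{cat:9s} {group:12s} mean = {x.mean():.2f}  SE = {se:.2f}")
    # Two-sided comparison between the groups for this chord category.
    t, p = stats.ttest_ind(ratings["musicians"][cat],
                           ratings["nonmusicians"][cat],
                           equal_var=False)
    print(f"{cat:9s} musicians vs. nonmusicians: t = {t:.2f}, p = {p:.3f}")
```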
REFERENCES
1. PALLESEN, K.J., E. BRATTICO & S. CARLSON. 2003. Emotional connotations of major and minor musical chords in musically untrained listeners. Brain Cogn. 51: 188–190.
2. BLOOD, A. et al. 1999. Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nat. Neurosci. 2: 382–387.
3. HARIRI, A.R., S.Y. BOOKHEIMER & J.C. MAZZIOTTA. 2000. Modulating emotional responses: effects of a neocortical network on the limbic system. Neuroreport 11: 43–48.
4. LIDDELL, B.J. et al. 2005. A direct brainstem–amygdala–cortical "alarm" system for subliminal signals of fear. Neuroimage 24: 235–243.