Psychology of Music
© The Author(s) 2023
DOI: 10.1177/03057356221146811
journals.sagepub.com/home/pom
Validation of the Measure of
Emotions by Music (MEM)
Éric Hanigan1, Arielle Bonneville-Roussy1, Gilles Dupuis1 and Christophe Fortin2
1Department of Psychology, Université du Québec à Montréal, Montreal, QC, Canada
2Department of Psychology, University of Ottawa, Ottawa, ON, Canada
Corresponding author: Éric Hanigan, Department of Psychology, Université du Québec à Montréal, 100 rue Sherbrooke Ouest, Montreal, QC, Canada. Email: hanigan.eric@courrier.uqam.ca
Abstract
If music connects to our most resonant emotional strings, why have we not used it to assess
emotions? Our goal was to develop and validate the Measure of Emotions by Music (MEM). A total
of 280 participants were randomly assigned to MEM Condition 1 (excerpts) or MEM Condition 2
(excerpts and adjectives). All participants responded to the PANAS-X. The internal consistency (α)
of the MEM subscales (Happy, Sad, Scary, Peaceful) was in the acceptable-to-strong range and similar
to the PANAS-X. Construct validity of the MEM illustrated cohesive convergence to the PANAS-X.
Confirmatory factor analysis confirmed the validity of a four-factor solution, as intended in the MEM.
Split-half analysis showed good reliability. A total of 69% of respondents reported a preference for the MEM. The MEM demonstrates very good psychometric characteristics, appears to be a well-received way to measure emotional states, and may represent a useful alternative for clinical groups that have difficulty identifying their emotions with words.
Keywords
music, emotion, affective state, assessment, validation of the Measure of Emotions by Music (MEM)
Music and emotion have been of interest since the dawn of time (Rochon, 2018). Leading thinkers and scientists believe that music predates language and allowed our ancestors to develop a system of emotional communication, social cohesion, and sexual selection. In particular, Schopenhauer and Jankélévitch believed that music manages to make us feel emotions that are so deep within us that they are ineffable (Jankélévitch, 1983/2003; Schopenhauer, 1819/1909). Over the past decades, much scientific research has focused on how music can evoke emotions. However, to our knowledge, there is no measure of emotion that uses music in the same way as words to describe the emotions felt, and there is a need for innovative ways to access emotion. For instance, people with autism spectrum condition (ASC) or alexithymia have
difficulty identifying emotions verbally (Akbari et al., 2021). In a systematic review, Huggins et
al. (2020) emphasize the importance of developing alternative forms of self-administered questionnaires to better measure emotional self-awareness and improve our understanding of how autistic persons identify and feel their own emotions. Therefore, this study aims to validate a new measure of emotion that uses the emotional musical excerpts validated by Vieillard et al. (2008) as the language of feeling.
Considering the limitations of verbal reports (written or spoken) for assessing emotions, using musical excerpts offers important advantages on at least two levels.
First, self-reported verbal questionnaires are affected by social desirability, that is, the tendency to under- or over-report socially undesirable or desirable attitudes (Latkin et al., 2017; Näher & Krumpal, 2012), as well as by language barriers, that is, misunderstanding of the meaning of words due to limited language proficiency (Choi & Pak, 2004). There are also documented translation issues: some languages have no direct words to describe certain emotions (Lomas, 2021; Zentner & Eerola, 2010). Music may be a useful way to
circumvent these issues, as music is less culturally sensitive (Balkwill et al., 2004; Fang et al.,
2017; Fritz et al., 2009) and may be perceived or felt effortlessly by the listener when excerpts are used as proxies for specific emotions. In fact, emotional responses to music are processed automatically
by the brain (Bigand & Poulin-Charronnat, 2006) and may be free of cultural barriers (Fritz
et al., 2009). Fritz and colleagues (2009) showed that although Western listeners performed
significantly better than Mafa listeners (a population largely unfamiliar with Western music) on a task of recognizing three emotions (happy, sad, and scary) in musical excerpts, the Mafa listeners still identified these excerpts as proxies for those emotions.
Also, cross-cultural research has shown that culture has little effect on the recognition of
emotions in music. Indeed, Western listeners can discriminate emotions in Indian raga music
(Balkwill & Thompson, 1999), and Japanese listeners can discriminate between the emotions
of Western and Hindustani music (Balkwill et al., 2004). Moreover, intense emotional responses
to music are related to clarity, sudden increase in sound, and rigidity of music, regardless of
cultural aspects (Beier et al., 2022). Finally, comparing major and minor modes of Western
music with Chinese listeners, Fang et al. (2017) showed that recognition of Western music
emotions was valid cross-culturally.
Second, some populations (e.g., individuals with ASC or alexithymia) who have difficulty identifying their emotions through words (Akbari et al., 2021) may benefit from an alternative way to represent how they feel. A measure of emotion with music may be suitable in this context.
Therefore, there is a need to provide a reliable and valid measure of emotions through music
and this article aims to address this issue.
Theoretical background
Philosophical debate between emotivism and cognitivism
There are two main currents in music and emotion research: emotivism and cognitivism (Vempala
& Russo, 2018). For the emotivists, listeners recognize and feel the emotions elicited by music,
whereas for the cognitivists, music does not induce true emotions but is rather perceived as a
symbol that represents human emotions (Davies, 2010; Konečni, 2008; Sloboda & Juslin,
2010). Recently, Vempala and Russo (2018) have combined both perspectives to create a meta-
cognitive model of emotion recognition in music. In this model, emotional judgment in music
depends on an interdependence of cognitive processes and basic emotional mechanisms. This
article proposes an approach that uses musical excerpts specifically composed to represent
basic emotions. The musical excerpts thus generate basic emotions in listeners, who then use cognitive processes to introspect on their emotional state.
Music and emotion
Studies since 2000 have brought scientific evidence that music can induce specific emotions
and can activate regions in the brain typically associated with emotions (Chanda & Levitin,
2013; Koelsch, 2014; Särkämö, 2018; Zatorre & Salimpoor, 2013). For instance, newborns
have an innate preference for consonance over dissonance (Trainor et al., 2002). This mechanism is governed by the parahippocampal gyrus, amygdala, hippocampus, and temporal poles
(Gosselin et al., 2005, 2007; Koelsch et al., 2006). fMRI studies show that these regions are
activated by exposure to unpleasant music (dissonance) and decrease their activity with pleas-
ant music (consonance; Koelsch et al., 2006). Regions such as the amygdala are implicated in responses to scary music, and patients with amygdala damage have difficulty identifying “scary” musical
excerpts. Interestingly, patients with brain damage who are unable to recognize a familiar mel-
ody are, nevertheless, able to recognize emotions associated with the melody (Peretz et al.,
1998, 2001). Moreover, individuals with congenital amusia are still able to identify emotions
of music even though they are unable to recognize the music (Gosselin et al., 2015).
Peretz et al. (1998) have shown that a musical excerpt of 250 ms is sufficient to discriminate
sad from happy music. Excerpts associated with anger, happiness, and sadness could be identi-
fied within a duration of less than 100 ms (Nordström & Laukka, 2019). Finally, several
researchers have shown that music induces the same emotional responses as a basic emotion
(Gagnon & Peretz, 2003; Mitterschiffthaler et al., 2007; Witvliet & Vrana, 2007). For example,
in Lundqvist et al. (2009), musical excerpts were chosen to represent a specific emotion and
these excerpts induced the same emotions in the listeners.
Variations and combinations of different musical features, such as tempo, major or minor mode, consonance (positive affect), and dissonance (negative affect), can be manipulated to express specific emotions (see Table 1 in Juslin & Laukka, 2004). The emotional excerpts validated by Vieillard et al. (2008) and used in this study take advantage of this knowledge.
Measuring emotion verbally and nonverbally
There is a wide range of ways to measure emotions verbally (written or spoken words) or nonverbally
(without words; mostly physiological data), each with its own benefits and limitations. The most com-
mon way to assess emotions induced by music is verbally by self-report with Likert scales (Eerola,
2018). These include standardized measures such as the Differential Emotion Scale (Izard et al.,
1993), the PANAS, and the expanded version (PANAS-X; Watson & Clark, 1994; Watson et al.,
1988), and the Profile of Mood States (Mcnair et al., 1971). These measures lack phenomenological data about the emotions felt with music (Lee et al., 2020; Zentner & Eerola, 2010). Self-report measures require respondents to choose from predetermined categories and are thus subject to report bias. Moreover, words may not always be interpreted the same way by researchers and participants, or even among participants. To date, self-reported questionnaires may be the most reliable way to distinguish basic emotions (Barrett, 2006) and are easy to quantify (Zentner & Eerola, 2010). However, considering that words do not always represent emotions well and are highly subjective, Zentner and Eerola suggest exploring other methods for measuring emotions.
Emotions can also be measured peripherally and indirectly through nonverbal methods
(Eerola, 2018). The peripheral method consists of measuring emotions by physiological reac-
tions (heart rate variability, skin conductance, respiration, facial electromyography, and
temperature) (for a review, see Hodges, 2010). A limitation of these measures is that they are
not always sensitive to the emotional experience at hand (Etzel et al., 2006; Nater et al., 2006).
Also, these measures are more likely to assess valence or arousal levels of an emotional response
rather than discrete emotions (Cacioppo et al., 2000; Mauss & Robinson, 2009; Siegel et al.,
2018). Although this method may be the most objective, it might not be reliable (Barrett et al.,
Meta-analyses show that physiological reactions associated with a specific emotion are
inconsistent (Cacioppo et al., 2000; Siegel et al., 2018; Stemmler, 2004). Indirect measures evaluate emotional states through automatic, non-intentional reactions (Fazio & Olson, 2003). Because they do not require introspection, they are less likely to be biased (Lee et al., 2020). However, indirect measures may be less sensitive than direct measures.
Measuring emotions through behavioral reactions may provide cues about emotion, since some emotions are associated with specific behaviors (e.g., fear with avoidance; Lang et al., 1983). Yet, observing behaviors would not adequately distinguish discrete emotions (Lee et al., 2020), and there is insufficient evidence that emotions are always accompanied by behaviors (DeWall et al., 2015; Schwarz & Clore, 2007).
Emotions can also be assessed nonverbally through the recognition of emotions in visual stimuli. The systematic review by Barrett et al. (2019) highlights that the reliability of inferring emotional states from facial stimuli depends mostly on how participants are asked to complete the tests and whether
the answers are verbal or nonverbal. Particularly, identifying emotional states from a short list
of adjectives (a verbal response) tends to show moderate to strong evidence. However, this limits
participants to forced choices and is subject to the same biases as verbal measures. Interestingly,
when participants complete a free-labeling task (verbal), inferences of emotional states vary considerably. Additionally, the expression and perception of facial emotions vary across cultures and situations and appear to be determined more by experience and environment than by universal recognition (Barrett et al., 2019; Caldara, 2017). Furthermore, facial patterns may not be reliable enough to deduce emotional states (Barrett et al., 2019). Barrett et al. point out that we do not know how facial expressions translate emotions in everyday life and that the mechanisms by which people perceive emotion need further study.
As illustrated, verbal and nonverbal measures both have benefits and limitations. Yet, in music and emotion research, the aforementioned measures have been used to assess music-induced emotion. None of them has sought to assess participants’ emotional states through, rather than as a result of, musical excerpts. Given the limitations of verbal self-reports of emotions, replacing words with music could be a unique and enjoyable way to overcome them.
Using music to assess emotional states
Since there is a need for alternative ways to assess emotions (Huggins et al., 2020), music is an
interesting avenue. Numerous studies have shown that music can elicit genuine emotions
(Lundqvist et al., 2009; Vieillard et al., 2008; Vuoskoski & Eerola, 2011). The meta-analysis of
Juslin and Laukka (2004) also revealed that emotion judgments in music remain consistent
across listeners in over 100 studies. Surprisingly, to our knowledge, no existing tool has used musical excerpts as response choices to assess emotions. Thus, our goal is to fill this gap by developing and validating the Measure of Emotions by Music (MEM), a measure that uses musical excerpts as response choices to assess emotional states.
Objective and hypothesis
The purpose of this study was to develop and validate a new psychometric tool for measuring
emotional states with musical excerpts as response choices. Based on Vempala and Russo’s
(2018) model, the main hypothesis of this research is that music can be used to assess emo-
tions. This study will answer the following question: Does the Measure of Emotion by Music
(MEM) represent a psychometric valid and reliable way of measuring emotional states?
Method
Participants
From December 2020 to April 2021, 280 participants were recruited through web platforms such as Facebook and LinkedIn, and by email through student associations of French-speaking universities in Quebec and Ontario, Canada. The inclusion criteria were being at least 18 years old and understanding French. The exclusion criteria were having psychological health issues or having a hearing condition, not corrected by a prosthesis, that could affect hearing.
Ethical considerations
This research was approved by the Research Ethics Board for student projects involving humans
at both participating universities (2020–2021). Participants were informed of the nature and
objective of the study and signed a consent form. Considering that participants may experience
negative emotions, references to psychosocial support were added to the consent form.
Measures
Sociodemographic questionnaire. It contains questions on age, gender (female, male, other), education level (elementary, high school, diploma of vocational study, college, bachelor’s degree, master’s, and above), type of program, student status (full-time, part-time) and/or employment status (full-time, part-time), type of job, and annual income.
Positive and Negative Affect Schedule–Expanded Form. The PANAS-X contains 60 adjectives of
emotional states (Watson & Clark, 1994). Items are divided into four categories: (1) General
Dimension Scales (positive affect, negative affect), (2) Basic Negative Affect Scales (fear, hostil-
ity, guilt, sadness), (3) Basic Positive Affect Scales (joviality, self-assurance, attentiveness), and
(4) Other Affective States (shyness, fatigue, serenity, surprise).
Responses are given on a Likert-type scale (never, a little, moderately, quite often, always), and the test lasts approximately 10 min.
MEM. The emotional musical excerpts used have been validated by Vieillard et al. (2008). Their
database contains 56 classical piano excerpts recorded in mp3 or MIDI format. These original excerpts were composed to be associated with the following emotions: happy, sad, scary, and peaceful. For the MEM, the first 32 MIDI files in their database were used. To listen to the
musical excerpts or to consult the methodology used in their design, please visit the Isabelle
Peretz Laboratory.1
In the first experiment of Vieillard et al. (2008), the mean percentage of the correspondence
between participants’ emotion identification and the intended emotion of the music excerpts
was 99% for happiness, 84% for sadness, 82% for scariness, and 67% for peacefulness. The
overall weighted Kappa coefficient (measuring interrater reliability) was 0.824. In their second
experiment, the percentage of correct recognition was 95% for happiness, 86% for sadness,
71% for scariness, and 91% for peacefulness. The speed of recognition was 483 ms for happi-
ness, 1,446 ms for sadness, 1,737 ms for scariness, and 1,261 ms for peacefulness. In the last experiment, dissimilarity judgments were collected by presenting 16 excerpts in 780 paired combinations (ABBA). The Pearson coefficient for the reliability of the dissimilarity judgments was 0.87, suggesting high agreement. The happy, sad, and scary excerpts were also
cross-culturally validated (Fritz et al., 2009).
The MEM was completed under two conditions (see Supplementary File 1 online). The time
required to complete each condition was approximately 10 min. Both conditions contain two
identical elements:
1. The two conditions have the same eight sets of four musical excerpts for a total of 32
excerpts (eight peaceful, eight happy, eight scary, and eight sad). The eight excerpts of
each emotion are the first ones provided in the database of the Isabelle Peretz Laboratory.
Each excerpt is represented by identical icons. When participants click on the icon, they
can hear the excerpt. They can listen to the four excerpts of a set and then click on the
one that represents the emotion they have felt over the last 24 hr.
2. From one set of questions to the next, the excerpts were shifted one answer position to the right so that each of the four emotions appears equally often at each position (see Supplementary File 2 online; a minimal sketch of this rotation is shown after this list).
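To make the rotation of answer positions concrete, the following minimal Python sketch builds eight question sets of four excerpts, shifting each emotion one position to the right from one set to the next. The file names and the helper function are illustrative assumptions, not the authors' Qualtrics implementation.

```python
# Minimal sketch of the answer-position rotation (file names are hypothetical).
EMOTIONS = ["peaceful", "happy", "scary", "sad"]

def build_question_sets(n_sets: int = 8):
    """Build n_sets questions of four excerpts, shifting the answer order one
    position to the right per set so each emotion occupies every position equally often."""
    sets = []
    for i in range(n_sets):
        shift = i % 4
        order = EMOTIONS[4 - shift:] + EMOTIONS[:4 - shift] if shift else list(EMOTIONS)
        # Set i uses the (i + 1)-th excerpt of each emotion (assumed file naming).
        sets.append([f"{emotion}_{i + 1:02d}.mid" for emotion in order])
    return sets

for number, excerpts in enumerate(build_question_sets(), start=1):
    print(f"Set {number}: {excerpts}")
```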
In the second condition only, participants had to answer a short multiple-choice written questionnaire (happy, sad, scary, or peaceful) to identify the emotion associated with the excerpt.
The MEM (Conditions 1 and 2) and the PANAS-X were integrated into a single questionnaire
on the Qualtrics web application.
Procedure
Upon entering Qualtrics, participants were randomly assigned to one of the two MEM conditions and had to complete the MEM and the PANAS-X (Watson & Clark, 1994). In both MEM conditions and when completing the PANAS-X, participants responded to the sentence: “In the past 24 hours, I
have felt.” For the MEM, participants were asked to choose the musical excerpt that corresponded
to their emotional state. In total, participants were presented with eight consecutive questions
each having four musical excerpts. The participants could choose only one musical excerpt per
question, the one that best represented their current emotion. At the end of the study, partici-
pants were asked to identify which questionnaire (MEM or PANAS-X) they preferred.
Analytic strategy
First, descriptive statistics are presented to characterize the two groups on the different sociode-
mographic variables and describe the results of the MEM and the PANAS-X.
The validation of the MEM followed two steps:
1. Construct validity: (a) Internal consistency of each of the four subscales (peaceful,
happy, scary, sad) was examined with Cronbach’s alpha coefficient;2 (b) Convergence of
the MEM and the PANAS-X subscale scores was assessed with stepwise regressions. The
dependent variables are the four subscales of the MEM, and the independent variables
are the 11 subscales of the PANAS-X. Residuals of the regression models were checked for normality. The stepwise method was chosen for three reasons: the goal was to explore which subset of PANAS-X variables best explained the variance in each MEM variable, rather than to verify a particular existing model or to compare different models; the prediction of MEM scores from the PANAS-X subscale scores was also
exploratory and not a main objective; it was not possible to exclude a priori some of the
PANAS-X subscales on the basis of some preexisting empirical or theoretical rationale.
Considering the drawbacks of stepwise regression, Bayesian regression was used to challenge the models found (Genell et al., 2010). Finally, we used a bootstrap approach (N = 1,000) to verify the stability of the regression coefficients obtained from the different stepwise models; (c) As a last step for construct validity, we performed a confirmatory factor
analysis (CFA) with the full MEM dataset. We used structural equation modeling (SEM)
and mediation analysis. We report the chi-square (χ2) statistic, and to examine how the
four-factor model of emotions fit our empirical findings, we used the following model fit
indicators: comparative fit index (CFI) and Tucker–Lewis Index (TLI), with values >.95
considered excellent; and root-mean square error of approximation (RMSEA) with val-
ues below .06 and upper confidence interval below .08 deemed excellent (Marsh et al.,
2004). As our indicators were dichotomous (coded 0 and 1), we used the robust weighted least squares, mean- and variance-adjusted (WLSMV) estimator, which provides less biased solutions with categorical data. There were no missing values.
2. A split-half correlation with the Spearman-Brown coefficient (rSB) was used to calculate the correlation between the two halves of the test. The eight excerpts per emotion were therefore split (1, 2, 3, 4 vs. 5, 6, 7, 8), and the correlation served as an index of item reliability. Test–retest reliability was not evaluated, considering the fluctuation of moods and the possibility of memorizing the musical excerpts. (A minimal computational sketch of the internal-consistency and split-half coefficients is shown after this list.)
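As a rough illustration of these two reliability indices (not the authors' analysis scripts), the following Python sketch computes Cronbach's alpha on a participants × items matrix of 0/1 excerpt choices, which with dichotomous items is equivalent to KR-20, and the Spearman-Brown corrected split-half coefficient r_SB = 2r/(1 + r). The simulated data are an assumption for demonstration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) matrix of 0/1 choices
    (equivalent to KR-20 with dichotomous items)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

def split_half_spearman_brown(items: np.ndarray) -> float:
    """Correlate the sums of items 1-4 and 5-8, then apply the
    Spearman-Brown correction r_SB = 2r / (1 + r)."""
    first_half = items[:, :4].sum(axis=1)
    second_half = items[:, 4:].sum(axis=1)
    r = np.corrcoef(first_half, second_half)[0, 1]
    return 2 * r / (1 + r)

# Simulated 0/1 choices for one eight-excerpt subscale (demonstration only).
rng = np.random.default_rng(42)
simulated = (rng.random((280, 8)) < rng.random((280, 1))).astype(int)
print(cronbach_alpha(simulated), split_half_spearman_brown(simulated))
```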
Secondary analyses verified whether participants correctly identified the emotions in the second MEM condition. Contingency coefficients were calculated between the score of each MEM subscale and the score of each adjective, using the sum of choices made. The standardized coefficient (Blaikie, 2003) is the contingency coefficient divided by the maximum contingency coefficient.3
This standardized coefficient varies from 0 to 1 independently of the number of cells in the
table. Bootstrapping was performed for each coefficient to obtain confidence intervals. The last
secondary analysis is descriptive and reports the percentage of preference for the MEM and the
PANAS-X.
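A minimal sketch of this standardized coefficient, assuming a hypothetical excerpt-by-adjective cross-tabulation (the real data are in Supplementary File 6): the Pearson contingency coefficient C = sqrt(χ²/(χ² + n)) is divided by its maximum for an r × c table, following Blaikie (2003) and Note 3.

```python
import numpy as np
from scipy.stats import chi2_contingency

def standardized_contingency(table: np.ndarray) -> float:
    """Pearson contingency coefficient divided by its maximum value for an
    r x c table (Blaikie, 2003), so the index ranges from 0 to 1."""
    chi2 = chi2_contingency(table)[0]
    n = table.sum()
    c = np.sqrt(chi2 / (chi2 + n))
    rows, cols = table.shape
    c_max = (((rows - 1) / rows) * ((cols - 1) / cols)) ** 0.25
    return c / c_max

# Hypothetical 4 x 4 cross-tabulation of excerpt choices (rows) by adjective
# choices (columns), for illustration only.
example = np.array([[40, 3, 2, 5],
                    [4, 45, 1, 3],
                    [2, 1, 20, 2],
                    [3, 2, 1, 30]])
print(round(standardized_contingency(example), 3))
```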
Results
Descriptive statistics
In MEM Condition 1, mean age of the participants (n = 128) was 29.05 years (SD = 11.71). For
MEM Condition 2, participants were on average 30.50 years of age (SD = 11.85; n = 141).
Participants were mainly from Canada (75.19%), were female (80.45%), and had a university degree (51.88%; see Supplementary File 3 online).
Descriptive analyses
MEM. Eight new continuous variables were created, corresponding to the sum of choices (0 = not selected; 1 = selected) of the peaceful, happy, scary, and sad excerpts and of the adjectives (in Condition 2). This resulted in eight variables with scores ranging from 0 to 8 per participant. Table 1 shows the descriptive statistics of the MEM. For MEM Condition 2, the mean choices and standard deviations of the adjectives were as follows: Peaceful (2.78 and 2.76), Happy (2.76 and 2.72), Scary (.80 and 1.59), and Sad (1.66 and 2.58).
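The scoring step itself can be illustrated with a short pandas sketch; the column layout and the random data are assumptions for demonstration (the synthetic values ignore the one-choice-per-set constraint), not the authors' data structure.

```python
import numpy as np
import pandas as pd

# Assumed layout: one row per participant, one 0/1 column per excerpt
# (peaceful_1 ... peaceful_8, happy_1 ... sad_8); names are hypothetical.
rng = np.random.default_rng(0)
emotions = ["peaceful", "happy", "scary", "sad"]
columns = [f"{emotion}_{i}" for emotion in emotions for i in range(1, 9)]
choices = pd.DataFrame(rng.integers(0, 2, size=(280, 32)), columns=columns)

# Sum the 0/1 selections to obtain the four 0-8 subscale scores per participant.
scores = pd.DataFrame({
    emotion: choices.filter(like=f"{emotion}_").sum(axis=1)
    for emotion in emotions
})
print(scores.describe())  # means and SDs analogous to Table 1
```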
PANAS-X. Mean values and standard deviations of subscales were as follows (N = 280): Seren-
ity (9.47 and 2.79), Joviality (23.62 and 7.18), Fear (11.73 and 4.69), and Sadness (10.64 and
4.94). The means, standard deviations, and Cronbach’s alphas of all the PANAS-X subscales
are shown in Supplementary File 4 online. The Cronbach’s alphas were: Negative Affect, 0.88;
Positive Affect, 0.88; Serenity, 0.75; Joviality, 0.91; Fear, 0.83; and Sadness 0.85. Alphas of the
Watson and Clark (1994) sample and those of the current sample are similar.
Construct validity
Internal consistency of the MEM scales. Consistency analyses (Cronbach’s alphas) revealed good
coherence of the items to the scale for the MEM subscales: Peaceful, 0.85; Happy, 0.88; Scary,
0.79, and Sad, 0.86.
Convergence between MEM subscales and PANAS-X subscales
Peaceful excerpts. The overall model was significant, F(3,276) = 24.68, p < .001, and
explained 21.1% of the variance (20.3% adjusted variance). Table 2 shows the values of the
standardized coefficients, the t-test, and the semi-partial correlations of the four subscales of
the MEM. PANAS-X Serenity explained 10.5% of the variance in the peaceful MEM, Self-assurance 2.62%, and Sadness 2.53%. This means that a high score on PANAS-X Serenity, combined with low scores on Self-assurance and Sadness, was associated with a high score on the peaceful excerpts.
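These percentages of explained variance appear to correspond to the squared semi-partial correlations reported in Table 2 (a reading we infer from the numbers rather than from an explicit statement in the text); for example,

\[
sr^2_{\text{Serenity}} = (.32)^2 \approx .10, \qquad
sr^2_{\text{Self-assurance}} = (-.16)^2 \approx .026, \qquad
sr^2_{\text{Sadness}} = (-.16)^2 \approx .026,
\]

with the small discrepancies from the reported 10.5%, 2.62%, and 2.53% presumably reflecting rounding of the coefficients in Table 2.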
Happy excerpts. The overall model was significant, F(3,275) = 35.14, p < .001, and explained
33.8% of the variance (32.9% adjusted variance). PANAS-X Joviality explained 18.06% of the variance in the happy MEM, Attentiveness 1.56%, Serenity 1.93%, and Sadness 1.82%. Thus, participants with a high score on Joviality and low scores on the other subscales mentioned tended to choose more happy excerpts.
Scary excerpts. The overall model was significant, F(4,275) = 28.3, p < .001, and explained
29.2% of the variance (28.1% adjusted variance). Four subscales of the PANAS-X were signifi-
cant predictors of the scary excerpts: Fear (5.15% of the variance), Serenity (2.13%), Attentiveness (3.5%), and Joviality (2.7%). These results suggest that the higher participants scored on PANAS-X Fear and Attentiveness, and the lower they scored on Serenity and Joviality, the more scary excerpts were chosen.
Table 1. Mean Values and Standard Deviations of the Choice of Musical Excerpts.

              MEM Condition 1 (n = 133)    MEM Condition 2 (n = 147)    MEM total (n = 280)
Excerpts      M        SD                  M        SD                  M        SD
Peaceful      3.15     2.76                2.58     2.59                2.85     2.69
Happy         1.59     2.27                2.50     2.76                2.07     2.58
Scary         1.41     1.94                .90      1.61                1.15     1.79
Sad           1.84     2.29                2.01     2.54                1.93     2.42

Note. MEM = Measure of Emotions by Music.
Sad excerpts. The overall model was significant, F(2,277) = 98.94, p < .001, and explained
41.7% of the variance (41.2% adjusted variance). Joviality explained 10.96% of the variance
and sadness 7.95%. The lower the participants scored on joviality of the PANAS-X, and the
higher they scored on sadness, the more likely participants were to choose sad excerpts.
Table 3 presents the results of the bootstrap procedure and the Bayesian regressions. The unstandardized coefficients for each variable of each model were compared with those obtained by the bootstrap procedure. As seen, the bootstrapped coefficients were very close to those of the original regressions. In sum, the Bayesian regressions confirmed the models obtained by the stepwise regressions.
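As a sketch of how such a coefficient-stability check can be run (with assumed variable names and statsmodels OLS rather than the authors' own software), the selected model is simply refit on resampled data and percentile intervals are taken for each coefficient:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def bootstrap_coefficients(y: pd.Series, X: pd.DataFrame,
                           n_boot: int = 1000, seed: int = 0) -> pd.DataFrame:
    """Refit the already selected regression on n_boot resamples (with replacement)
    and return the 2.5th and 97.5th percentiles of each coefficient."""
    rng = np.random.default_rng(seed)
    design = sm.add_constant(X)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        draws.append(sm.OLS(y.iloc[idx], design.iloc[idx]).fit().params)
    return pd.DataFrame(draws).quantile([0.025, 0.975])

# Hypothetical usage with assumed column names (not the authors' dataset):
# ci = bootstrap_coefficients(data["mem_sad"], data[["joviality", "sadness"]])
```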
The best models chosen for each MEM variable correspond to those obtained by the step-
wise regressions. The BFM factors revealed that the odds of each model fitting the data
increased markedly compared with the equal prior probability hypothesis (i.e., before observ-
ing the data). For example, the selected BFM coefficient was 256.05 for the best model of the MEM Happy; compared with the next best model, which includes only the PANAS-X Joviality, Attentiveness, Serenity, and Sadness subscales and has a coefficient of 72.01, the selected model fits the data better. For the MEM Peaceful, the best factor coefficient was 140.89 compared
with 58.99 for the next best model. For the MEM Scary, it is 127.25 compared with 90.45.
Finally, for the MEM Sad, the coefficient of the best model was 382.42 compared to 221.84.
The 1/BF10 values, which compare how well the best models fit the data relative to the next best-fitting model, varied from 1.38 to 3.27. This means that the model chosen for the MEM Happy is 3.27 times more likely to be the best model for the empirical data than the model with the second highest coefficient.
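As a hedged aside on how such indices are conventionally read (assuming the JASP conventions cited in the note to Table 3), BFM expresses how much the odds in favor of a model increase after observing the data, and two candidate models can be compared through the ratio of their Bayes factors against the common null model:

\[
\mathrm{BF}_{M} = \frac{P(M \mid \text{data}) \,/\, P(\lnot M \mid \text{data})}{P(M) \,/\, P(\lnot M)}, \qquad
\mathrm{BF}_{12} = \frac{\mathrm{BF}_{10}}{\mathrm{BF}_{20}} .
\]

Under this reading, the value of 3.27 for the MEM Happy indicates that the selected model is 3.27 times better supported by the data than the runner-up model.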
Table 2. Multiple Regressions Between the MEM and PANAS-X Subscales.

                    Unstandardized    t                 p        r a
MEM Peaceful
  Serenity b        .39               t(278) = 6.06     <.001    .32
  Self-assurance    –.10              t(277) = –3.03    .003     –.16
  Sadness           –.10              t(276) = –2.98    .003     –.16
MEM Happy
  Joviality         .23               t(278) = 8.67     <.001    .43
  Attentiveness     –.12              t(277) = –2.55    .011     –.13
  Serenity          –.16              t(276) = –2.83    .005     –.14
  Sadness           –.09              t(275) = –2.75    .006     –.14
MEM Scary
  Fear              .11               t(278) = 4.48     <.001    .23
  Serenity          –.13              t(277) = –2.88    .004     –.15
  Attentiveness     .13               t(276) = 3.67     <.001    .19
  Joviality         –.06              t(275) = –3.30    .001     –.17
MEM Sad
  Joviality         –.13              t(278) = –7.21    <.001    –.33
  Sadness           .17               t(277) = 6.14     <.001    .28

Note. MEM = Measure of Emotions by Music.
a The coefficients represent the semi-partial correlations.
b Subscales of the PANAS-X.
CFA
We performed a CFA with the 32 MEM items each loading on its respective factor (Peaceful,
Happy, Scary, and Sad; eight items per factor). CFA results confirmed the validity of a four-factor
solution, with adequate model fit, χ2(458) = 640.09, p < .001; CFI = .96; TLI = .96; RMSEA = .04
(.03, .04). Bivariate correlations between the four factors are presented in Table 4, and param-
eter estimates are presented in Supplementary File 5 online.
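For reference, the conventional (unscaled) definitions of these fit indices are given below, with M denoting the hypothesized model and B the baseline (independence) model; the robust WLSMV estimator computes scaled versions of the chi-square, and some implementations use N rather than N − 1 in the RMSEA denominator, but the logic is the same:

\[
\mathrm{CFI} = 1 - \frac{\max(\chi^2_M - df_M,\, 0)}{\max(\chi^2_B - df_B,\, 0)}, \qquad
\mathrm{TLI} = \frac{\chi^2_B/df_B - \chi^2_M/df_M}{\chi^2_B/df_B - 1}, \qquad
\mathrm{RMSEA} = \sqrt{\frac{\max(\chi^2_M - df_M,\, 0)}{df_M\,(N-1)}} .
\]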
Table 3. Comparisons of Regression Coefficient Models From the Stepwise Procedure and the Bootstrap One, and Results of the Bayesian Regressions.

Coefficient                   Unstandardized a        Unstandardized b
MEM Peaceful
  Serenity a                  .39 (.26, .31)          .39 (.25, .51)
  Self-assurance              –.10 (–.17, –.03)       –.10 (–.18, –.03)
  Sadness                     –.10 (–.17, –.03)       –.10 (–.16, –.4)
  Bayesian regression R2      .21
  BFM                         140.89
  1/BF10                      2.3
MEM Happy
  Joviality                   .23 (.18, .28)          .23 (.18, .17)
  Attentiveness               –.12 (–.21, –.03)       –0.12 (–.21, –.03)
  Serenity                    –.16 (–.28, –.05)       –.16 (–.27, –.06)
  Sadness                     –.09 (–.15, –.03)       –.09 (–.15, –.04)
  Bayesian regression R2      .34
  BFM                         256.05
  1/BF10                      3.27
MEM Scary
  Fear                        .11 (.06, .15)          0.11 (.05, .16)
  Serenity                    –.13 (–.23, –.04)       –.13 (–.23, –.05)
  Attention                   .13 (.06, .20)          .13 (.04, .21)
  Joviality                   –.06 (–.10, –.02)       –.06 (–.09, –.03)
  Bayesian regression R2      .29
  BFM                         127.249
  1/BF10                      1.38
MEM Sad
  Joviality                   –.13 (–.17, –.10)       –.13 (–.18, –.09)
  Sadness                     .17 (.12, .22)          .16 (.11, .22)
  Bayesian regression R2      .42
  BFM                         322.416
  1/BF10                      1.61

Note. MEM = Measure of Emotions by Music. Unstandardized a = from stepwise regression; Unstandardized b = from bootstrap (n = 1,000). Numbers in parentheses are 95% confidence intervals. Bayesian regression R2 = R square for the model. BFM = factor by which the odds in favor of a specific model increase after observing data. 1/BF10 = probability of the model to fit the data compared with the next best model (https://jasp-stats.org/2020/11/26/how-to-do-bayesian-linear-regression-in-jasp-a-case-study-on-teaching-statistics/).
a PANAS-X subscales.
Split-half reliability (Spearman–Brown coefficient)
Reliability analyses based on the split-half correlation (rSB) show that the four subscales of the MEM have high reliability coefficients: Peaceful, 0.83; Happy, 0.87; Scary, 0.77; Sad, 0.86.
Identification of emotion musical excerpts
Contingency coefficient values (standardized) revealed a very strong association between the
emotional excerpts and the corresponding adjectives. They varied from .875 to .930 (see Supplementary File 6 online).
Preference between the MEM and the PANAS-X
The playfulness and overall enjoyable experience that music provides when measuring emotional states was one of our main rationales for validating the MEM. In order to examine whether participants favored words or music to assess emotions, we asked participants to select the questionnaire they preferred. Out of 275 participants, 69% preferred the MEM (5 non-responses).
Discussion
The main purpose was to develop and validate a new measure of emotion with music. Inspired
by the metacognitive model of Vempala and Russo (2018) stating that emotional judgments of
music depend on an interdependence between felt emotion and cognition, we had strong reasons
to believe that the emotional excerpts validated by Vieillard et al. (2008), based on the assump-
tion that emotional recognition is universal (Fritz et al., 2009), would allow participants to
identify their emotions as adequately as with words. Indeed, the psychometric properties of our
validated instrument showed good reliability and validity.
Our results demonstrated the reliability and the validity of the MEM through various means.
First, the internal consistency of the MEM showed acceptable to strong Cronbach’s alphas. This
suggests that participants chose musical excerpts in a consistent manner. This was also sup-
ported by CFA that demonstrated a clear four-factor structure. Second, construct validity illus-
trated that the four subscales of the MEM had good convergence with the corresponding
emotions of the PANAS-X. Moreover, the happy, scary, and sad excerpts had a negative relation-
ship with the opposite emotions in the PANAS-X, that is, sadness, r = –0.14; serenity, r = –0.15;
and joviality, r = –0.33. Finally, our factor analysis confirmed that the peaceful, happy, scary, or
sad excerpts were uniquely related to a single emotion factor. This provided even further evi-
dence of construct validity. In sum, the MEM has strong psychometric characteristics compara-
ble to those of the PANAS-X (Watson & Clark, 1994), confirming its validity.
Table 4. MEM Correlations Between Factors.

           Peaceful    Happy      Scary    Sad
Happy      –.54***
Scary      –.54***     –.32***
Sad        –.37***     –.72***    –.08

Note. MEM = Measure of Emotions by Music; n = 280.
***p < .001.
Furthermore, to ensure that the excerpts within each category accurately measured the same emotion, we performed an inter-item reliability analysis. The results revealed that the excerpts were strongly related within each category. That is, when participants chose an emotion, they were likely to select the same one again. As mentioned previously, this analysis offers a good estimation of reliability.
Secondary analyses showed a strong relationship between the choices of excerpts and the choices of the corresponding adjectives. This further supports the idea that participants recognized the emotions of the excerpts. The last secondary analysis highlighted that
69% of participants preferred to assess their emotional state with music (through the MEM)
rather than through words. The inclination toward this new measure of emotions is not negli-
gible and could constitute a new and appreciated avenue to evaluate emotions in scientific
research.
Although other methods of assessing emotional states have been widely used in research for
decades, they have limitations that the MEM addresses. Most notably, self-reported verbal measures are not always well suited to representing emotion, are at risk of response bias and language barriers, and are difficult to translate (Choi & Pak, 2004; Latkin et al., 2017; Näher & Krumpal, 2012; Zentner & Eerola, 2010). With music, the socially associated meanings of words can be avoided and no translation is required. One could argue that music is also influenced by cultural norms, and this argument would be valid, but the excerpts used for the MEM were not developed to appeal to a particular culture but to be used in emotion research (Vieillard et al., 2008), and they were validated cross-culturally (Fritz et al., 2009).
The MEM addresses another limitation of previous measures of emotions, that is, the meas-
ures of facial expressions. Facial expressions are affected by culture and are not sufficient to
adequately infer emotional states (Barrett et al., 2019; Caldara, 2017). However, emotion in
music can be recognized cross-culturally (Balkwill et al., 2004; Fang et al., 2017; Fritz et al.,
2009). Therefore, the MEM is a very interesting tool for researching emotions with diverse sam-
ples and languages. The MEM also overcomes the shortcomings of behavioral observation and of peripheral and indirect measures by using a reliable and easily quantified response method, that is, self-report (Barrett, 2006; Zentner & Eerola, 2010).
Limitations
We acknowledge that the MEM has certain limitations. First, the MEM has four sets of emo-
tional excerpts that are validated (Vieillard et al., 2008). In this sense, the small number of
emotions in the MEM limits the choice of more refined emotional states. A promising avenue
would be to create new musical excerpts to target other emotions, or to use more emotional
categories of validated musical excerpts such as those in the Geneva Emotional Music Scales
(Zentner et al., 2008).
Another limitation is the likelihood that participants will recognize the musical excerpts
with repeated measures. Since the laboratory of Isabelle Peretz has 56 musical excerpts, some
of which we did not use, it would be possible to create other versions of the MEM to address this
limitation for longitudinal research. Also, because anyone with a mental health disorder or a hearing disorder not corrected by a prosthesis was excluded from the research, our sample may not be representative of the general population. As well, our sample was highly educated, young, and mostly female, which could have biased the results.
A further limitation is the timing of the data collection, which took place during the COVID-19 pandemic. Considering all the adverse factors associated with the pandemic, there were concerns about the mental health and emotional state of people in general. As our participants were evaluated during this crisis, this may have affected their responses to the PANAS-X and, there-
fore, the association between the MEM and the PANAS-X. In this regard, we have provided, in
Supplementary File 4 online, a descriptive analysis comparing the responses to the MEM and
PANAS-X. In short, negative affects were significantly higher in our sample than the original
PANAS-X but these differences remain in the small range. As such, we believe that the COVID-
19 pandemic influenced to a certain degree the participant’s emotional state, mostly through
higher mean-level of negative emotional states. These differences do not seem to have affected
the reliability or the validity of our results. For instance, internal consistencies achieved with
our sample with regard to the PANAS-X are similar to those reported by Watson and Clark
(1994). Replicating the study in a non-pandemic context would be ideal, though a complete
return to normality for everyone may not happen for quite some time.
Implication for future research
As this study aimed to present a validation of the MEM to the research community, there are
many implications of our findings for future research. First, it would be interesting to test
whether the MEM facilitates the assessment of emotional states in populations with difficulties
in verbal expression of emotion, as is the case with ASC and alexithymia. Recent research sug-
gests that music may facilitate the identification of their emotional states (Akbari et al., 2021;
Lonsdale, 2019; Lyvers et al., 2020; Wagener et al., 2021). Indeed, neuroscience research
shows that the brains of individuals with ASC process music in the same way as those of neurotypical people (Caria et al., 2011). In some cases, individuals with ASC have better music perception abilities (Heaton, 2009) and their
perception of emotions in music is intact (Heaton et al., 2008). Also, a study has shown that
music therapy helps to decrease the symptoms of alexithymia, as it allows individuals with this
condition to express their emotions through music (Akbari et al., 2021). Moreover, Lyvers et al.
(2020) suggest that individuals with alexithymia use music to feel and regulate their emotions
in the same way that some do with illicit substances. Similarly, Lonsdale (2019) has shown that
people with alexithymia may use music to regulate their negative emotions. Therefore, music
could be a good alternative for assessing emotional states in these populations. Beyond using
the MEM as an assessment tool, we could consider using the musical excerpts included in the
MEM as a means to communicate emotions in a population of nonverbal individuals with ASC. Creating a
mobile application that uses the MEM could serve this purpose.
Third, our study has revealed that 69% of participants preferred to respond to the MEM
rather than the PANAS-X. The playfulness and originality of using music to assess emotions
could increase research participation rates and further capture participants’ attention during
data collection. Because the emotions of the excerpts used in our study are universally recog-
nized (Fritz et al., 2009), using the MEM instead of verbal measures of emotions could circum-
vent some biases with words such as language or cultural barriers. It is well known that it is
difficult to translate a questionnaire adequately and that long and complex translation proce-
dures must be used. Music used in lieu of a verbal questionnaire could overcome these difficul-
ties. Borrowing from Schopenhauer (1819/1909), we indeed believe that music would be “the
language of feeling and of passion, as words are the language of reason” (p. 339).
Conclusion
If music really did appear before language in human evolution (Rochon, 2018), it is surprising
that this study is the first to show that music can be used in the same way as words to assess
emotional states. We have shown in this study that the MEM is a reliable and valid
questionnaire to evaluate emotional states. Our results showed that participants who reflected
on how they felt over the past 24 hr were able to consistently choose the same emotional cate-
gory of music clips. Similarly, the emotions conveyed by our musical excerpts were consistent
with those of one of the most widely used affective state questionnaires (PANAS-X; Watson &
Clark, 1994). This indicates that the emotional sense individuals give to music may be as accu-
rate as words, and may be used to assess emotion in scientific research. Instead of reading a
long list of adjectives to explain how one feels, this study opens the way for individuals to listen to and choose music instead. It also creates new research opportunities in the field of music and emotions, and may have practical implications for populations that have difficulty using words to express their emotions.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or
publication of this article: This research received a grant from the “Faculté des sciences humaines” of the
Université du Québec à Montréal.
ORCID iDs
Éric Hanigan https://orcid.org/0000-0003-4288-015X
Arielle Bonneville-Roussy https://orcid.org/0000-0001-7909-8845
Supplemental material
Supplemental material for this article is available online.
Notes
1. https://peretzlab.ca/online-test-material/material/emotional-clips/. Copyright of the musical stim-
uli, ©Bernard Bouchard, 1998.
2. It should be noted that the subscales of the MEM are created with dichotomous variables (to choose the excerpt vs. not to choose it). One issue is whether the use of internal consistency
with a Cronbach’s alpha is justified with dichotomous variables. In this regard, Preston and Colman
(2000) conducted a study to test the optimal number of response choices for self-reported question-
naires. The results of their study did not reveal significant differences in internal consistency coef-
ficients between all types of measures. Furthermore, Dolnicar and Grün (2007) found no significant
difference in the test reliability of the same questionnaires under different response scales (dichotomous, ordi-
nal, and interval).
3. Maximum contingency coefficient = [((r − 1)/r) × ((c − 1)/c)]^(1/4), where r stands for rows and c for columns.
References
Akbari, R., Amiri, S., & Mehrabi, H. (2021). The effectiveness of music therapy on reducing alexithymia
symptoms and improvement of peer relationships. International Journal of Behavioral Sciences, 14(4),
178–184. https://doi.org/10.30491/ijbs.2021.214227.1186
Balkwill, L.-L., & Thompson, W. F. (1999). A cross-cultural investigation of the perception of emo-
tion in music: Psychophysical and cultural cues. Music Perception, 17(1), 43–64. https://doi.
org/10.2307/40285811
Balkwill, L.-L., Thompson, W. F., & Matsunaga, R. (2004). Recognition of emotion in Japanese, Western,
and Hindustani music by Japanese listeners. Japanese Psychological Research, 46(4), 337–349.
https://doi.org/10.1111/j.1468-5584.2004.00265.x
Barrett, L. F. (2006). Solving the emotion paradox: Categorization and the experience of emotion. Personality
and Social Psychology Review, 10(1), 20–46. https://doi.org/10.1207/s15327957pspr1001_2
Barrett, L. F., Adolphs, R., Marsella, S., Martinez, A. M., & Pollak, S. D. (2019). Emotional expressions
reconsidered: Challenges to inferring emotion from human facial movements. Psychological Science
in the Public Interest, 20(1), 1–68. https://doi.org/10.1177/1529100619832930
Beier, E. J., Janata, P., Hulbert, J. C., & Ferreira, F. (2022). Do you chill when I chill? A cross-cultural study
of strong emotional responses to music. Psychology of Aesthetics, Creativity, and the Arts, 16, 74–96.
https://doi.org/10.1037/aca0000310
Bigand, E., & Poulin-Charronnat, B. (2006). Are we “experienced listeners”? A review of the musical
capacities that do not depend on formal musical training. Cognition, 100(1), 100–130. https://doi.
org/10.1016/j.cognition.2005.11.007
Blaikie, N. (2003). Analyzing quantitative data. SAGE. https://doi.org/10.4135/9781849208604
Cacioppo, J., Berntson, G., Larsen, J., Poehlmann, K., & Ito, T. (2000). The psychophysiology of emotion.
In R. J. Lewis & J. M. Haviland-Jones (Eds.), The handbook of emotion (pp. 173–191). Guilford Press.
Caldara, R. (2017). Culture reveals a flexible system for face processing. Current Directions in Psychological
Science, 26(3), 249–255. https://doi.org/10.1177/0963721417710036
Caria, A., Venuti, P., & de Falco, S. (2011). Functional and dysfunctional brain circuits underlying
emotional processing of music in autism spectrum disorders. Cerebral Cortex, 21(12), 2838–2849.
https://doi.org/10.1093/cercor/bhr084
Chanda, M. L., & Levitin, D. J. (2013). The neurochemistry of music. Trends in Cognitive Sciences, 17(4),
179–193. https://doi.org/10.1016/j.tics.2013.02.007
Choi, B. C. K., & Pak, A. W. P. (2004). A catalog of biases in questionnaires. Preventing Chronic Disease,
2(1). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1323316/
Davies, S. (2010). Emotions expressed and aroused by music: Philosophical perspectives. In P. N. Juslin,
J. A. Sloboda, & J. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp.
15–43). Oxford University Press.
DeWall, C., Baumeister, R., Chester, D., & Bushman, B. (2015). How often does currently felt emotion
predict social behavior and judgment? A meta-analytic test of two theories. Emotion Review, 8, 136–
143. https://doi.org/10.1177/1754073915572690
Dolnicar, S., & Grün, B. (2007). How constrained a response: A comparison of binary, ordinal and
metric answer formats. Journal of Retailing and Consumer Services, 14(2), 108–122. https://doi.
org/10.1016/j.jretconser.2006.09.006
Eerola, T. (2018). Music and emotions. In R. Bader (Ed.), Springer handbook of systematic musicology (pp.
539–554). Springer. https://doi.org/10.1007/978-3-662-55004-5_29
Etzel, J. A., Johnsen, E. L., Dickerson, J., Tranel, D., & Adolphs, R. (2006). Cardiovascular and respiratory
responses during musical mood induction. International Journal of Psychophysiology, 61(1), 57–69.
https://doi.org/10.1016/j.ijpsycho.2005.10.025
Fang, L., Shang, J., & Chen, N. (2017). Perception of western musical modes: A Chinese study. Frontiers in
Psychology, 8, Article 1905. https://doi.org/10.3389/fpsyg.2017.01905
Fazio, R. H., & Olson, M. A. (2003). Implicit measures in social cognition research: Their mean-
ing and use. Annual Review of Psychology, 54, 297–327. https://doi.org/10.1146/annurev.
psych.54.101601.145225
Fritz, T., Jentschke, S., Gosselin, N., Sammler, D., Peretz, I., Turner, R., Friederici, A. D., & Koelsch, S.
(2009). Universal recognition of three basic emotions in music. Current Biology, 19(7), 573–576.
https://doi.org/10.1016/j.cub.2009.02.058
Gagnon, L., & Peretz, I. (2003). Mode and tempo relative contributions to “happy-sad” judgements of
equitone music. Cognition and Emotion, 17, 25–40. https://doi.org/10.1080/02699930302279
Genell, A., Nemes, S., Steineck, G., & Dickman, P. W. (2010). Model selection in medical research: A simu-
lation study comparing Bayesian model averaging and stepwise regression. BMC Medical Research
Methodology, 10, Article 108. https://doi.org/10.1186/1471-2288-10-108
Gosselin, N., Paquette, S., & Peretz, I. (2015). Sensitivity to musical emotions in congenital amusia.
Cortex, 71, 171–182. https://doi.org/10.1016/j.cortex.2015.06.022
Gosselin, N., Peretz, I., Johnsen, E., & Adolphs, R. (2007). Amygdala damage impairs emotion recog-
nition from music. Neuropsychologia, 45(2), 236–244. https://doi.org/10.1016/j.neuropsycholo-
gia.2006.07.012
Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., Baulac, M., & Samson, S. (2005). Impaired
recognition of scary music following unilateral temporal lobe excision. Brain, 128(3), 628–640.
https://doi.org/10.1093/brain/awh420
Heaton, P. (2009). Assessing musical skills in autistic children who are not savants. Philosophical
Transactions of the Royal Society B: Biological Sciences, 364(1522), 1443–1447. https://doi.
org/10.1098/rstb.2008.0327
Heaton, P., Allen, R., Williams, K., Cummins, O., & Happé, F. (2008). Do social and cognitive deficits curtail
musical understanding? Evidence from autism and Down syndrome. British Journal of Developmental
Psychology, 26(2), 171–182. https://doi.org/10.1348/026151007X206776
Hodges, D. A. (2010). Psycho-physiological measures. In P. N. Juslin, J. A. Sloboda, & J. Sloboda (Eds.),
Handbook of music and emotion: Theory, research, applications (pp. 279–311). Oxford University Press.
Huggins, C. F., Donnan, G., Cameron, I. M., & Williams, J. H. G. (2020). A systematic review of how emo-
tional self-awareness is defined and measured when comparing autistic and non-autistic groups.
Research in Autism Spectrum Disorders, 77, Article 101612.
Izard, C. E., Libero, D. Z., Putnam, P., & Haynes, O. M. (1993). Stability of emotion experiences and their
relations to traits of personality. Journal of Personality and Social Psychology, 64(5), 847–860. https://
doi.org/10.1037//0022-3514.64.5.847
Jankélévitch, V. (2003). Music and the Ineffable (C. Abbate, Trans.). Princeton University Press. (Original
work published 1983).
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review
and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238.
https://doi.org/10.1080/0929821042000317813
Koelsch, S. (2014). Brain correlates of music-evoked emotions. Nature Reviews Neuroscience, 15(3), 170–
180. https://doi.org/10.1038/nrn3666
Koelsch, S., Fritz, T. V., Cramon, D. Y., Müller, K., & Friederici, A. D. (2006). Investigating emotion with
music: An fMRI study. Human Brain Mapping, 27(3), 239–250. https://doi.org/10.1002/hbm.20180
Konečni, V. J. (2008). Does music induce emotion? A theoretical and methodological analysis. Psychology
of Aesthetics, Creativity, and the Arts, 2(2), 115–129. https://doi.org/10.1037/1931-3896.2.2.115
Lang, P. J., Levin, D. N., Miller, G. A., & Kozak, M. J. (1983). Fear behavior, fear imagery, and the psycho-
physiology of emotion: The problem of affective response integration. Journal of Abnormal Psychology,
92(3), 276–306. https://doi.org/10.1037//0021-843x.92.3.276
Latkin, C. A., Edwards, C., Davey-Rothwell, M. A., & Tobin, K. E. (2017). The relationship between
social desirability bias and self-reports of health, substance use, and social network factors among
urban substance users in Baltimore, Maryland. Addictive Behaviors, 73, 133–136. https://doi.
org/10.1016/j.addbeh.2017.05.005
Lee, K. M., Lindquist, K. A., Arbuckle, N. L., Mowrer, S. M., & Payne, B. K. (2020). An indirect measure of
discrete emotions. Emotion, 20(4), 659–676. https://doi.org/10.1037/emo0000577
Lomas, T. (2021). Towards a cross-cultural lexical map of wellbeing. The Journal of Positive Psychology,
16(5), 622–639. https://doi.org/10.1080/17439760.2020.1791944
Lonsdale, A. J. (2019). Emotional intelligence, alexithymia, stress, and people’s reasons for listening to
music. Psychology of Music, 47(5), 680–693. https://doi.org/10.1177/0305735618778126
Lundqvist, L.-O., Carlsson, F., Hilmersson, P., & Juslin, P. (2009). Emotional responses to music:
Experience, expression, and physiology. Psychology of Music, 37(1), 61–90. https://doi.
org/10.1177/0305735607086048
Lyvers, M., Cotterell, S., & Thorberg, F. A. (2020). “Music is my drug”: Alexithymia, empa-
thy, and emotional responding to music. Psychology of Music, 48(5), 626–641. https://doi.
org/10.1177/0305735618816166
Marsh, H. W., Hau, K.-T., & Wen, Z. (2004). In search of golden rules: Comment on hypothesis-test-
ing approaches to setting cutoff values for fit indexes and dangers in overgeneralizing Hu and
Bentler’s (1999) findings. Structural Equation Modeling, 11(3), 320–341. https://doi.org/10.1207/
s15328007sem1103_2
Mauss, I. B., & Robinson, M. D. (2009). Measures of emotion: A review. Cognition and Emotion, 23(2),
209–237. https://doi.org/10.1080/02699930802204677
Mcnair, D., Lorr, M., & Droppleman, L. F. (1971). Manual for the Profile of Mood States. Educational and
Industrial Testing Service.
Mitterschiffthaler, M. T., Fu, C. H. Y., Dalton, J. A., Andrew, C. M., & Williams, S. C. R. (2007). A func-
tional MRI study of happy and sad affective states induced by classical music. Human Brain Mapping,
28(11), 1150–1162. https://doi.org/10.1002/hbm.20337
Näher, A.-F., & Krumpal, I. (2012). Asking sensitive questions: The impact of forgiving wording and
question context on social desirability bias. Quality & Quantity, 46(5), 1601–1616. https://doi.
org/10.1007/s11135-011-9469-2
Nater, U. M., Abbruzzese, E., Krebs, M., & Ehlert, U. (2006). Sex differences in emotional and psychophysi-
ological responses to musical stimuli. International Journal of Psychophysiology, 62(2), 300–308.
https://doi.org/10.1016/j.ijpsycho.2006.05.011
Nordström, H., & Laukka, P. (2019). The time course of emotion recognition in speech and music. The
Journal of the Acoustical Society of America, 145(5), Article 3058. https://doi.org/10.1121/1.5108601
Peretz, I., Blood, A. J., Penhune, V., & Zatorre, R. (2001). Cortical deafness to dissonance. Brain: A Journal
of Neurology, 124(5), 928–940. https://doi.org/10.1093/brain/124.5.928
Peretz, I., Gagnon, L., & Bouchard, B. (1998). Music and emotion: Perceptual determinants, immediacy,
and isolation after brain damage. Cognition, 68(2), 111–141. https://doi.org/10.1016/S0010-
0277(98)00043-2
Preston, C. C., & Colman, A. M. (2000). Optimal number of response categories in rating scales: Reliability,
validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1–15. https://
doi.org/10.1016/S0001-6918(99)00050-5
Rochon, M. (2018). Le cerveau & la musique: Une odyssée fantastique d’art et de science [The brain & music:
A fantastic odyssey of art and science]. Éditions MultiMondes.
Särkämö, T. (2018). Cognitive, emotional, and neural benefits of musical leisure activities in aging and
neurological rehabilitation: A critical review. Annals of Physical and Rehabilitation Medicine, 61(6),
414–418. https://doi.org/10.1016/j.rehab.2017.03.006
Schopenhauer, A. (1909). World as will and Idea (Vol. 1, 7th ed.) (R. B. Haldane & J. Kemp, Trans.). Kegan
Paul, Trench, Trübner & Co. (Original work published 1819)
Schwarz, N., & Clore, G. L. (2007). Feelings and phenomenal experiences. In A. W. Kruglanski & E. Tory
Higgins (Eds.), Social psychology: Handbook of basic principles (2nd ed., pp. 385–407). Guilford Press.
Siegel, E. H., Sands, M. K., Van den Noortgate, W., Condon, P., Chang, Y., Dy, J., Quigley, K. S., & Barrett, L.
F. (2018). Emotion fingerprints or emotion populations? A meta-analytic investigation of autonomic
features of emotion categories. Psychological Bulletin, 144(4), 343–393. https://doi.org/10.1037/
bul0000128
Sloboda, J. A., & Juslin, P. N. (2010). At the interface between the inner and outer world: Psychological
perspectives. In P. N. Juslin, J. A. Sloboda, & J. Sloboda (Eds.), Handbook of music and emotion: Theory,
research, applications (pp. 73–97). Oxford University Press.
Stemmler, G. (2004). Physiological processes during emotion. In P. Philippot & R. S. Feldman (Eds.), The
regulation of emotion (pp. 33–70). Psychology Press. https://doi.org/10.4324/9781410610898
Trainor, L. J., Tsang, C. D., & Cheung, V. H. W. (2002). Preference for sensory consonance in 2- and 4-month-
old infants. Music Perception, 20(2), 187–194. https://doi.org/10.1525/mp.2002.20.2.187
Vempala, N. N., & Russo, F. A. (2018). Modeling music emotion judgments using machine learning
methods. Frontiers in Psychology, 8, Article 2239. https://doi.org/10.3389/fpsyg.2017.02239
Vieillard, S., Peretz, I., Gosselin, N., Khalfa, S., Gagnon, L., & Bouchard, B. (2008). Happy, sad, scary and
peaceful musical excerpts for research on emotions. Cognition and Emotion, 22(4), 720–752. https://
doi.org/10.1080/02699930701503567
Vuoskoski, J., & Eerola, T. (2011). Measuring music-induced emotion. Musicae Scientiae, 15(2), 159–173.
https://doi.org/10.1177/1029864911403367
Wagener, G., Berning, M., Costa, A., Steffgen, G., & Melzer, A. (2021). Effects of emotional music on
facial emotion recognition in children with autism spectrum disorder (ASD). Journal of Autism and
Developmental Disorders, 51, 3256–3265. https://doi.org/10.1007/s10803-020-04781-0
Watson, D., & Clark, L. A. (1994). The PANAS—X: Manual for the Positive and Negative Affect Schedule–
Expanded Form [Data set]. University of Iowa. https://doi.org/10.17077/48vt-m4t2
Watson, D., Clark, L. A., & Tellegen, A. (1988). Development and validation of brief measures of positive
and negative affect: The PANAS scales. Journal of Personality and Social Psychology, 54(6), 1063–
1070. https://doi.org/10.1037//0022-3514.54.6.1063
Witvliet, C. V. O., & Vrana, S. R. (2007). Play it again Sam: Repeated exposure to emotionally evocative
music polarises liking and smiling responses, and influences other affective reports, facial EMG, and
heart rate. Cognition and Emotion, 21(1), 3–25. https://doi.org/10.1080/02699930601000672
Zatorre, R. J., & Salimpoor, V. N. (2013). From perception to pleasure: Music and its neural substrates.
Proceedings of the National Academy of Sciences of the United States of America, 110(Suppl. 2), 10430–
10437. https://doi.org/10.1073/pnas.1301228110
Zentner, M., & Eerola, T. (2010). Self-report measures and models. In P. N. Juslin, J. A. Sloboda, & J.
Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 187–221). Oxford
University Press.
Zentner, M., Grandjean, D., & Scherer, K. R. (2008). Emotions evoked by the sound of music: Characterization,
classification, and measurement. Emotion, 8(4), 494–521. https://doi.org/10.1037/1528-3542.8.4.494