Musicae Scientiae
16(3) 340 –356
© The Author(s) 2012
Reprints and permission: sagepub.
co.uk/journalsPermissions.nav
DOI: 10.1177/1029864912459046
msx.sagepub.com
Music-enhanced recall: An effect of mood congruence, emotion arousal or emotion function?

Michael Tesoriero and Nikki Sue Rickard
Monash University, Australia

Corresponding author:
Nikki Sue Rickard, Monash University, Wellington Rd, Melbourne, 3800, Australia
Email: nikki.rickard@monash.edu
Abstract
Research on whether music facilitates recall has been inconsistent and has lacked a theoretical basis.
Three competing emotion-based theories yield differential predictions dependent on arousal levels, mood
congruence, and functional relevance of information respectively. The aim of this study was to determine
the most informative framework to understand the effect of emotion-inducing music on the short-term
recall of information about narratives. Ninety-five participants (range = 18–58 years) were randomly
allocated to one of four groups differentiated by the type of music presented to them, which was either
happy (n = 26), sad (n = 19), fearful (n = 25), or calm (n = 25). Participants listened to music, followed
by a positively or negatively emotionally-valenced narrative, and free recall of the narrative was tested
approximately five minutes later. The results provided strongest support for the mood congruence theory
in this context. After exposure to positive music, recall of positive information was significantly greater
than recall of negative information. Mood regulation ability moderated this effect, with symmetrical mood
congruence observed in participants with a tendency to repair their negative moods. Music may therefore
offer an effective means of facilitating encoding of information when the mood induced by preceding
music is congruent with the valence of information learnt. While the arousal and function theories may
be more informative in other contexts (for instance, when music is played following learning or longer-
term recall is tested), the current findings may help to clarify some of the inconsistencies previously
observed in the research on music-facilitated recall.
Keywords
arousal, congruence, emotion, encoding, function, mood regulation, music, short-term recall
Music listening is generally agreed to be a powerful moderator of emotional states (Eich, Ng,
Macaulay, Percy, & Grebneva, 2007; Juslin & Laukka, 2004; Sloboda & O’Neill, 2001). Given
that music listening influences emotional states1 and that emotional states influence the
encoding of information (Bower & Forgas, 2000, 2001; Cahill & McGaugh, 1998; Levine &
Pizarro, 2004, 2006), it follows that music listening may have the potential to influence the
encoding of information. However, there has been limited research investigating the effect of
music on encoding and the findings have been inconsistent (see Rickard, Toukhsati, & Field,
2005). This may be partially a result of the absence of a clear and consistent theoretical
framework guiding this research (Juslin & Laukka, 2004; Sloboda & Juslin, 2001). In this
context, an emotion-based theoretical framework may provide some insight (e.g., Thompson,
Schellenberg, & Husain, 2001).
Three main explanations for the effect of emotional states on encoding are notable in the
emotion literature, and are distinguished by the conditions under which recall will be facili-
tated. The emotional arousal theory predicts enhanced recall of information when the partici-
pant is emotionally aroused (Cahill & McGaugh, 1998), while the mood congruence theory
predicts enhanced recall of information that is congruent with the emotional valence of the
participant (Bower & Forgas, 2000, 2001). In contrast, the function theory predicts enhanced
recall of information that is functionally relevant to the emotional state of the participant
(Levine & Burgess, 1997; Levine & Pizarro, 2004).
Emotional arousal theory
The emotional arousal theory explains the effect of emotional states on memory as mediated by
neurobiological mechanisms that accompany emotional arousal (Cahill & McGaugh, 1998).
Emotional states activate hormonal and neural mechanisms that are not engaged in emotion-
ally neutral states (Cahill & McGaugh, 1998). When emotions are evoked, stress hormones are
often released, which act on receptors in the amygdala to modulate long-term memory storage.
In empirical studies that provide support for this theory, participants simultaneously view slides
and listen to an emotionally neutral story or an emotionally arousing story (Cahill & McGaugh,
1995). Memory for the story is tested, using free recall and/or recognition, at various times
after listening to the story. The emotionally arousing story is typically remembered significantly
better than the emotionally neutral story. It is noteworthy that the emotionally arousing story,
usually a car accident, has typically been negative in emotional valence (Talarico, Berntsen, &
Rubin, 2009). To test whether emotional arousal enhances learning of information when the
participant is emotionally aroused irrespective of emotional valence would provide stronger
support for this theory.
Mood congruence theory
The mood congruence theory explains the effect of emotional states on encoding as a result of
congruence between the emotional state of the participant and the positivity or negativity of
the information presented (Bower & Forgas, 2000, 2001). The mood of the participant pro-
motes selective encoding of information that has a similar valence (Eich & Forgas, 2003),
purportedly due to a closer relatedness within an associative network (Bower, 1981). When an
emotional state is evoked, concepts that are associated with that emotion valence become
primed and readily available for use (Bower, 1983). This affective priming promotes processing
of information that is congruent with the emotional valence of the participant (Bower, 1983,
1992; Bower & Forgas, 2000; Eich & Forgas, 2003). In empirical studies supporting this the-
ory, participants are first induced into an emotional state, and then are requested to partici-
pate in an apparently unrelated study during which information is presented (Bower & Forgas,
2000). Later, when participants are in a neutral mood, their memory for the information is
tested. Enhanced recall typically occurs for information that is congruent with the mood of
the participant at encoding as compared with incongruent moods (Ellis & Moore, 1999).
Interestingly, mood-congruent encoding appears to be asymmetrical, being stronger for
positive than for negative valence conditions (see Blaney, 1986; Forgas, 1995;
Singer & Salovey, 1988 for reviews), a bias which may be due to a tendency to regulate emo-
tions (Rusting, 1998, 2001). That is, some participants may be more motivated to maintain
positive moods and repair negative moods, diminishing mood congruence in the negative
condition. Trait differences in mood regulation may moderate mood congruence; partici-
pants with a high tendency to regulate their mood would be more likely to demonstrate this
asymmetry.
Function theory
The effect of mood on cognition may also depend on processing strategy (Forgas, 1999).
Positive moods are thought to lead to more heuristic processing strategies with open, cre-
ative, and inclusive processing solutions (Fiedler, 2001; Forgas, 1992; Fredrickson, 2001;
Isen, 1999). In contrast, negative moods are thought to lead to more systematic and ana-
lytic processing strategies (Fiedler, 2001; Forgas, 1992). The function theory proposes that
the effect of emotional states on encoding is related to the functions of the basic emotions
(Levine & Burgess, 1997; Levine & Edelstein, 2009; Levine & Pizarro, 2004, 2006). Each
basic emotion is argued to have a unique antecedent event and subsequent unique moti-
vations and action tendencies that organize thought and behaviour (Lazarus, 1991).
These different action tendencies imply differential information processing and encoding
of information for each basic emotion (Levine & Burgess, 1997; Levine & Pizarro, 2004,
2006). When events are goal congruent, happiness is evoked and there is no need for prob-
lem solving; attention is broad and likely to result in a general facilitation of the encoding
of incoming information (Levine & Burgess, 1997). However, when events are goal incon-
gruent, a negative emotion is evoked and there is a need for problem solving and increased
attention to goal relevant information (Levine & Burgess, 1997). Thus, if fear motivates
avoidance of the threat to a goal, then people may selectively encode information associated
with the threat. Similarly, if sadness motivates withdrawal from a goal and reflection
on the loss, then people may selectively encode information associated with the outcome of
the goal loss. Finally, if anger motivates overcoming the obstacle to a goal, then people may
selectively encode information associated with the obstacle.
Experimental studies investigating the differential encoding of information for each
emotional state are, however, limited. Levine and Burgess (1997) directly investigated the
effects of happiness, anger, and sadness on the encoding of the different types of informa-
tion (i.e., setting, goal, agent, outcome, consequences) in a narrative. The emotional states
were evoked by randomly assigning high or low grades to students, after which they lis-
tened to a narrative about a student’s life at university, then completed a four-minute
distraction task, and finally their free recall of the narrative was tested. It was found that
participants who rated feeling happy demonstrated enhanced recall for both goals and
outcomes. Those who felt sad showed enhanced recall for outcomes while those who felt
angry showed enhanced recall for goals, but not agents as expected. However, the study
was limited by a failure to control for the quantity of information across the various infor-
mation types and this may have confounded the results. Nonetheless, the experimental
paradigm employed by Levine and Burgess (1997) provides a useful basis from which to
explore the explanatory power of each theory to understanding the effect of music on
encoding.
Test of three theories
Importantly, these predominant theories for explaining emotion-enhanced recall have
yet to be contrasted within a single study. While such an approach would provide power-
ful evidence for the utility of one theory over another, several key criteria would nonethe-
less need to be satisfied within such an experiment’s design. First, valence and arousal
would each need to be independently manipulated while the other variable is held con-
stant to contrast the arousal and mood congruence theories (Gayle, 1997). Second, to test
the function theory, the basic emotions (e.g., happy, sad, fear) would also need to be dif-
ferentially induced, as occurred in the study by Levine and Burgess (1997). Music is
capable of inducing basic emotions that vary in arousal and valence (e.g., Kreutz, Ott,
Teichmann, Osawa, & Vaitl, 2008), satisfying the criteria to test effectively the contribu-
tion of each theory to understanding the effect of music on encoding. A further require-
ment to test the utility of one theory over another would be to present a sufficient scope
of information. While the arousal theory predicts that arousal will facilitate recall of all
types of information, the mood congruence theory predicts facilitation on the basis of
information valence, so both positive and negative valence information should be pre-
sented. Finally, the function theory predicts facilitation on the basis of information rele-
vance, and therefore goal relevant information (i.e., goal, agent, threat, and outcome)
should be presented.
The current study
The aim of the current study was to examine the explanatory power of these three key
theories in understanding the effect of emotion-inducing music on encoding of valenced
information. To achieve this within a single study, each of the recommendations outlined
above was adopted. In addition, to increase external validity for the induction of emotion
by music, an online experimental context was used to enable participants to experience
music in a non-laboratory setting and to expand recruitment diversity beyond the typical
university sample (Reips, 2002). Participants listened to music excerpts intended to
induce happiness, sadness, fear or calmness. They then listened to a narrative which
consisted of four episodes (i.e., two positive and two negative valence) that occurred dur-
ing a student’s life. Each episode included different types of goal relevant information:
goal, agent, threat, and outcome. After distraction tasks, participants completed a free
recall test of the narrative.
According to the function theory, participants exposed to the sad music should demon-
strate higher mean recall for the outcome information than participants exposed to any
other music, and participants exposed to the fearful music should demonstrate higher
mean recall for the threat information than participants exposed to any other music.
According to the mood congruence theory, however, mean recall scores for the positive sto-
ries should be highest in the positive music condition (i.e., happy and calm), and mean
recall for the negative stories should be highest in the negative music condition (i.e., sad
and fearful). In addition, according to the mood regulation theory, it was hypothesized that
participants with high mood regulation scores would demonstrate greater mood congru-
ence in the positive valence condition. According to the emotional arousal theory, high
arousal music (i.e., happy and fearful) should result in higher mean recall for all types of
information than low arousal music (i.e., sad and calm). (See Figure 1 for summary of
theoretical expectations).
Method
Participants
The sample consisted of 95 participants, 27 males and 68 females (M = 26.56 years, SD =
9.63 years, range = 18–58 years), who were convenience sampled via advertisements in the
local community, the Monash University community, and on internet discussion boards. The
participants were randomly allocated to one of four groups differentiated by the type of music
presented: happy (n = 26), sad (n = 19), fearful (n = 25), or calm (n = 25). Participants aged
between 17 and 60 years without any hearing impairments were eligible to participate.
Participants were also requested to abstain from consuming any stimulants or depressants for
2 hours prior to the study.
Materials
Pre-intervention measures between groups. Prior to the music intervention, the following variables
were rated by participants on 5-point Likert-type scales: current emotional arousal level (i.e.,
how calm or excited they were feeling), current emotional valence level (i.e., how unpleasant or
pleasant they were feeling); and on 4-point Likert-type scales: stimulant intake level (i.e., the
amount of time since their last intake of caffeine or another stimulant), and depressant intake
level (i.e., the amount of time since their last intake of alcohol or another depressant).
Music excerpts. The music excerpts were selected from a pool of music pieces that successfully
induced the intended emotional states (i.e., happiness, sadness, fear, or calmness; see Appendix) in
previous studies (Kreutz et al., 2008; Krumhansl, 1997; Mayer, Allen, & Beauregard, 1995; Mit-
terschiffthaler et al., 2007; Panksepp & Bekkedal, 1997; Pelletier, 2004). The intended emotional
state induction of three of these music excerpts was also verified for a population representative of
that recruited in the current study via a pilot study. The duration of each music excerpt was less
than 3 minutes,2 and each music excerpt was edited to have a 1-second fade in and fade out using
Sony Sound Forge 7.0. The music excerpts were normalized at 89 dB using MP3Gain 1.2.5. A
silence condition was not included in the current study to ensure relatively consistent experiment
duration and auditory stimulation for all participants. Participants were instructed to rate on
5-point Likert-type scales how they felt when they had been listening to the music, specifically on
discrete emotion scales (i.e., happy, sad, angry, fearful, calm, disgust, and bored) and dimensional
emotion scales (i.e., arousal and valence), as well as their familiarity and preference for the music.

Figure 1. Summary of hypothesized effects of music (happy, fearful, calm or sad) on recall for different types
of narrative content, according to the Emotional Arousal Theory, Mood Congruence Theory (# dotted line
shows hypothesized recall levels of High Mood repair individuals) and Function Type Theory.
Narrative. The information to be recalled was presented in the form of a short narrative which
was adapted from a study by Levine and Burgess (1997) with permission from L. Levine (per-
sonal communication, 5 May 2009). The audio narrative consisted of four episodes that occurred
during a student’s life. There were two positive episodes (going on a ski trip and receiving the
highest grade in an exam) and two negative episodes (missing out on going to a concert and fail-
ing a subject). Analysis of memorability across stories in the current study showed no difference
in overall recall of the four episodes; F(3, 282) = 1.43, p =.23. Each episode included five types
of information: (1) the setting; (2) the goal of the protagonist; (3) the agent whose action was
congruent or incongruent with the goal; (4) the threat to the goal; and (5) the outcome, that is,
whether the goal was attained or not. Each episode consisted of five sentences of similar length,
one for each type of information, which had been adapted from Levine and Burgess (1997) to
address the dissimilar sentence lengths. The duration of the narrative was 2:05 minutes, and the
narrative was edited to have a 1 second fade in and fade out. The audio of the female narrator
was transduced using a Microsoft LifeChat LX-2000 microphone. The narrative was recorded
and edited using Sony Sound Forge 7.0, and was normalized at 89 dB using MP3Gain 1.2.5.
Images. The images used in a distracter task were obtained from the International Affective
Picture System (IAPS; Lang, Bradley, & Cuthbert, 2005), which is a set of standardized affective
images. Four images were selected for emotion neutrality (arousal ratings between 4.8 and 5.4,
and valence ratings between 3.3 and 4.3; each on a scale from 1 to 9).
Trait Meta-Mood Scale. The Trait Meta-Mood Scale is a 30-item self-report measure of mood
experience (TMMS; Salovey, Mayer, Goldman, Turvey, & Palfai, 1995). Of the three subscales,
the subscale of interest in this study was the mood repair scale. The mood repair scale measures
the degree to which individuals seek to maintain pleasant moods and repair unpleasant ones.
The internal consistency of this scale was good (Cronbach’s alpha = .82; Salovey et al., 1995)
and it has exhibited good construct validity; it was negatively correlated with the Center for
Epidemiological Studies Depression Scale (Salovey et al., 1995), and positively with emotion
regulation strategies: distraction and reappraisal (John & Gross, 2007). In the current study, the
internal consistency of the mood repair subscale, as measured by Cronbach’s alpha, was .77.
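The internal consistency statistic reported here can be computed directly from item-level responses. The following is a minimal sketch in Python of Cronbach's alpha; the simulated Likert responses and the number of items are illustrative stand-ins, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a 6-item subscale on a 1-5 Likert scale:
# a shared "true" level per respondent plus item-level noise
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(50, 6)), 1, 5)
alpha = cronbach_alpha(scores)
print(round(alpha, 2))
```

Because the simulated items share a common component, the resulting alpha is high, analogous to the .77 reported for the mood repair subscale.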
Free recall scoring sheet. An 80-item scoring sheet was developed to code the free recall responses
for the four episodes (two positive and two negative) and the five types of information (setting,
goal, agent, threat, and outcome; each of which consisted of four short phrases), in line with
the coding method used by Levine and Burgess (1997). Coders who were blind to the music
conditions and the emotion ratings of the participants recorded whether each phrase was pres-
ent or absent, with one point allocated for the presence of a phrase. To be allocated a point, the
phrase could be exactly as stated by the narrator, a synonym, or a closely related phrase. This
method of scoring demonstrated high inter-rater reliability (.93-.98 across the different types
of information).
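The coding rule (one point if the phrase appears exactly, as a synonym, or as a closely related phrase) can be sketched as a simple lookup. The phrases and accepted variants below are hypothetical stand-ins, not the actual 80-item scoring sheet:

```python
# Hypothetical scoring key: each scorable phrase lists its accepted variants
# (exact narrator wording, synonyms, closely related phrases).
SCORING_KEY = {
    "ski trip": ["ski trip", "skiing holiday", "trip to the snow"],
    "highest grade": ["highest grade", "top mark", "best score"],
    "failed subject": ["failed the subject", "failed a subject", "did not pass"],
}

def score_free_recall(response: str, key=SCORING_KEY) -> int:
    """One point per key phrase with any accepted variant present in the response."""
    text = response.lower()
    return sum(any(v in text for v in variants) for variants in key.values())

recall = "She went on a skiing holiday and later failed the subject."
print(score_free_recall(recall))  # 2 of 3 phrases credited
```

In the study, two blind coders applied rules of this kind independently, which is what the inter-rater reliability of .93-.98 refers to.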
Procedure
The experiment was administered online, and accessed by participants via provision of a
web address and a password. Participants were required to wear headphones for the dura-
tion of the experiment, which enabled them to listen to the music and then the narrative,
and to avoid distractions that may have varied considerably in the non-laboratory setting.
They were informed of all procedures, although the complete aim of the experiment was
concealed from the participants to avoid demand characteristics. Instead they were informed
that the aim was to investigate differences in the way people process visual and auditory
information.
After logging onto the website, participants were provided with another brief introduction to
the study and a detailed explanatory statement. They first completed the pre-intervention mea-
sures, and were then instructed to listen to an audio sample (duration: 4s) and to adjust the
volume to a level comfortable to them. The following tasks were timed. Participants were
instructed to close their eyes as they listened to the music (happy, sad, fearful or calm) and then
the narrative, and open their eyes when it had finished. Next, two distraction tasks (two minutes
each) required participants to type out objects and sounds that they noticed the last time they
went shopping in order to prevent rehearsal and ensure the narrative was no longer in short-
term memory. Participants then spent another two minutes viewing emotionally neutral
images to induce a neutral mood to reduce the influence of the emotional state at the time of
retrieval. In the free recall phase, participants were instructed to type out what they could recall
from listening to the narrative (untimed). Finally, participants rated the music excerpt that they
had heard on categorical and dimensional measures of emotion, familiarity, and preference,
completed the mood repair scale of the TMMS, and then were debriefed.
Results
Data set screening
There were 138 cases in the original data set. However, due to technical difficulties presenting
the audio material on certain computers, data were incomplete in 43 cases, resulting in a final
data set of 95. It is advisable in online studies to also screen the data set for unengaged respond-
ers (Reips, 2002). Two measures were used in the current study to determine participant
engagement with the experiment: the time taken to complete the experiment and the data
provided in the distraction tasks (Reips, 2002). No negative outliers were present on the exper-
iment duration variable, indicating that all participants spent a reasonable amount of time
completing the experiment. Similarly, all participants completed the distraction tasks indicat-
ing that participants were engaged with the experiment and that rehearsal was prevented.
Alpha was set at .05 and assumptions for all tests were satisfied unless otherwise specified.
Pre-intervention differences between groups
The potential differences between the groups prior to the music intervention were assessed using
Kruskal-Wallis ANOVAs because the assumption of normality was violated for all rating scales.
Participant groups did not differ on their self-reported levels of current arousal,
Table 1. The means and standard deviations of the categorical, dimensional, familiarity, and preference
ratings as a function of music excerpt.
Ratings        Happy*         Sad*           Fear*          Calm*
Happy┼         3.73 (1.28)    A2.79 (1.32)   2.04 (1.34)    3.48 (1.26)
Sad┼┼          B1.19 (0.40)a  2.42 (1.17)    1.72 (0.84)    2.12 (1.27)d
Fear┼┼         C1.31 (0.68)a  1.63 (1.21)    2.60 (1.41)    C1.32 (0.69)d
Calm┼┼         D2.73 (1.40)a  3.53 (1.31)    2.08 (1.32)    3.64 (1.35)
Anger┼┼        1.42 (0.99)a   1.11 (0.46)b   1.88 (1.09)c   1.32 (0.90)d
Disgust        1.04 (0.20)a   1.21 (0.63)b   1.32 (0.85)c   1.12 (0.44)d
Bored          2.08 (1.06)a   1.63 (0.83)b   2.00 (1.16)    2.04 (1.24)d
Arousal┼┼      2.05 (1.14)    E1.84 (0.83)   2.80 (1.08)    E1.84 (0.90)
Valence┼┼      3.42 (1.33)    3.68 (1.00)    F2.88 (1.09)   3.64 (1.08)
Familiarity    3.58 (1.45)    3.32 (1.29)    3.04 (1.34)    2.88 (1.36)
Preference┼┼   3.58 (1.72)    3.79 (0.79)    G2.72 (1.17)   3.72 (0.94)
Note. The rating scale was 1 (low) to 5 (high).
For the categorical ratings, bold indicates the expected high correspondence between the intended emotion and the
emotion rating. For the dimensional ratings, bold indicates those music excerpts expected to be rated highest on that
dimension. Non-parametric tests were used because the assumption of normality was violated for 41 of the 44 emotion
rating scales (4 music conditions × 11 music rating scales; K-S test), and the assumption of homogeneity of variance was
violated for 4 of the 11 music rating scales as indicated by the Levene test (p < .05).
┼ = Kruskal-Wallis tests on the emotion ratings between the music excerpts revealed a significant difference at α < .05.
A, B, C, D, E, F, G = Post-hoc pair-wise comparisons (i.e., Mann-Whitney) revealed that these ratings were significantly
different from the other ratings in bold at a Bonferroni adjusted value of α < .01.
* = Friedman tests on the emotion ratings within the groups revealed a significant difference at α < .05.
a, b, c, d = Post-hoc pair-wise comparisons (i.e., Wilcoxon) revealed that these ratings were significantly different from the
other ratings in bold at a Bonferroni adjusted value of α < .01.
χ2(3, n = 95) = 6.44, p =.09, current valence, χ2(3, n = 95) = 4.48, p =.21, or self-reported levels
of stimulant intake, χ2(3, n = 95) = 2.83, p =.42, or depressant intake, χ2(3, n = 95) = .29, p =.96.
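A group-equivalence check of this kind can be run with a standard Kruskal-Wallis test; the sketch below simulates 5-point ratings for the four music groups, assuming scipy is available (the study's own data are not reproduced):

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Hypothetical 5-point pre-intervention ratings for the four music groups
# (happy n=26, sad n=19, fearful n=25, calm n=25), drawn from one distribution
# so that, as in the study, no pre-existing group difference is expected
groups = [rng.integers(1, 6, size=n) for n in (26, 19, 25, 25)]
h, p = kruskal(*groups)
print(f"H(3) = {h:.2f}, p = {p:.2f}")
```

A non-significant result (p > .05) would indicate, as here, that randomization produced comparable groups before the music intervention.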
Verification of music excerpts. The means and standard deviations of the categorical, dimen-
sional, familiarity, and preference ratings for each music excerpt are summarized in Table 1.
Table 1 indicates that across groups, each of the music excerpts induced the highest rating on
the intended emotion category, and that this rating was significantly different from at least one
of the other music excerpts. Table 1 also indicates that within groups, the music excerpts induced
the highest overall rating on the intended emotion category, and that this rating was signifi-
cantly different from at least two of the other emotion categories. The categorical ratings of
happy, χ2(3, n = 95) = 20.89, p < .001; sad, χ2(3, n = 95) = 17.40, p < .001; fearful, χ2(3,
n = 95) = 21.89, p < .001; calm, χ2(3, n = 95) = 21.89, p < .001; and angry, χ2(3, n = 95) =
12.19, p =.01, were significantly different across music excerpts, while the ratings of disgust
and bored were not. The dimensional ratings of arousal, χ2(3, n = 95) = 13.96, p =.003, and
valence, χ2(3, n = 95) = 7.69, p =.05, were also significantly different. The sad music excerpt
was, however, not rated low on the valence dimension. While there were no significant differ-
ences in familiarity ratings across excerpts, the preference ratings were significantly different,
χ2(3, n = 95) = 13.42, p =.004, with the fearful music excerpt rated as less preferred than the
other music excerpts.
Analyses of recall data: Test of three theories
Function theory. To test the function theory of emotion-facilitated recall, the total number of
correct responses for recall of setting, goal, threat, outcome, and agent information was col-
lated. A 4 × 5 mixed model ANOVA revealed that the interaction between the music type and
the information type was not significant, F(12, 364) = 1.11, p =.35, η2 = .04. The main effect
of music type on recall was significant, F(3, 91) = 2.74, p =.05, η2 = .08, which appeared to be
due to the lower recall in the happy music condition (Tukey’s HSD post-hoc tests approached
significance for happy compared to fearful, p < .07, and calm, p < .08). A significant main effect
for information type was also found, F(4, 364) = 24.23, p < .001, η2 = .21. Pair-wise compari-
sons with a Bonferroni correction revealed that agent information (M = 7.16, SD = 3.69) was
recalled significantly better than setting (M = 4.48, SD = 3.07), goal (M = 4.90, SD = 3.22),
threat (M = 5.22, SD = 4.19), and outcome (M = 5.36, SD = 3.61) information.
Mood congruence theory. To test the mood congruence theory of emotion-facilitated recall, the
numbers of correct responses for recall of the positive stories and the negative stories were col-
lated. The music types were then combined into a positive valence music group (happy and
calm; n = 51) and a negative valence music group (sad and fearful; n = 44). The means and
standard error of the mean for recall across each music condition for negative and positive sto-
ries are summarized in Figure 2.
Figure 2 suggests that there was an interaction between music valence and narrative
valence. After exposure to negative music, the mean recall of negative stories appears to be
higher than the mean recall of positive stories. In contrast, after exposure to positive music,
the mean recall of positive stories appears to be higher than the mean recall of negative
stories. A 2 × 2 mixed model ANOVA revealed that there was a significant interaction
between the music type and the information type, F(1, 93) = 6.98, p =.01, η2 = .07. Post-
hoc repeated measures t-tests confirmed that after exposure to positive music, the recall of
positive stories (M = 13.67, SD = 8.43) was significantly greater than the recall of negative
stories (M = 11.57, SD = 7.71), t(50) = 2.12, p =.04, r2 = .08, but that after exposure to
negative music, the recall of negative stories (M = 15.66, SD = 8.22) was not significantly
greater than the recall of positive stories (M = 13.63, SD = 9.11), t(43) = 1.65, p =.11,
r2 = .06. There was no significant main effect of either music type or information type on recall.

Figure 2. Mean scores on the recall test as a function of music and information valence. Error bars indicate
standard error of the mean.
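The post-hoc repeated measures t-test and its effect size (here, r2 = t2/(t2 + df)) can be sketched as follows; the per-participant recall scores are simulated to mirror the reported group means, not taken from the study:

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 51  # size of the positive-music group in the study
# Hypothetical per-participant recall scores: positive stories recalled
# slightly better than negative ones after positive music
pos_recall = rng.normal(13.7, 8.4, n)
neg_recall = pos_recall - rng.normal(2.1, 5.0, n)
t, p = ttest_rel(pos_recall, neg_recall)
df = n - 1
r_squared = t**2 / (t**2 + df)  # effect size as reported in the paper
print(f"t({df}) = {t:.2f}, p = {p:.3f}, r2 = {r_squared:.2f}")
```

The same two-step logic (significant 2 × 2 interaction, then paired comparisons within each music valence) underlies the results reported above.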
Mood regulation. To investigate whether capacity for mood regulation moderated mood
congruence, mood repair scale scores were categorized via a median split (median = 22.00)3
into high mood repair (n = 55) and low mood repair groups (n = 40). The analysis to test mood
congruence was repeated with the inclusion of mood repair as a moderating variable. The 2 ×
2 × 2 mixed model ANOVA revealed that the interaction between mood repair, music valence, and narrative valence was significant, F(1, 91) = 5.37, p = .02, η² = .06 (see Figure 3). For those high on mood repair, the interaction between the music valence and the narrative valence was significant, F(1, 53) = 14.33, p < .001, η² = .21. Post-hoc repeated measures t-tests revealed that for those high on mood repair, recall of positive stories (M = 16.17, SD = 8.47) was significantly greater than the recall of the negative stories (M = 12.53, SD = 8.32) after exposure to positive music, t(29) = 2.95, p = .006, r² = .23, two-tailed, while recall of negative stories (M = 16.28, SD = 8.51) was significantly greater than the recall of positive stories (M = 12.04, SD = 9.77) after exposure to negative music, t(24) = 2.49, p = .02, r² = .21, two-tailed. In contrast, for participants scoring low on the mood repair scale, there was no significant interaction between music valence and narrative valence or main effect of narrative valence, although a significant main effect of music valence was observed, F(1, 38) = 4.36, p = .04, η² = .10, with recall scores higher following negative than positive music.

Figure 3. Mean scores on the recall test as a function of music and information valence for participants (A) low on the mood repair scale and (B) high on the mood repair scale. Error bars indicate standard error of the mean.
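The median-split categorization used above can be sketched as follows. This is a minimal illustration with hypothetical mood repair scores (the actual TMMS data are not reproduced here; the paper's median was 22.00):

```python
# Median split of mood repair scores into high and low groups
# (hypothetical scores for illustration only).
from statistics import median

scores = [18, 20, 21, 22, 23, 25, 27, 28]  # hypothetical TMMS mood repair scores
cut = median(scores)

high = [s for s in scores if s > cut]   # "high mood repair" group
low = [s for s in scores if s <= cut]   # "low mood repair" group

print(cut, high, low)  # -> 22.5 [23, 25, 27, 28] [18, 20, 21, 22]
```

In the study itself, the resulting group membership was then entered as a between-subjects factor in the 2 × 2 × 2 mixed model ANOVA.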
Emotional arousal theory. To test the emotional arousal theory of emotion-facilitated recall, the
total number of correct responses was collated (across valence and information types). The
music types were combined into a high emotional arousal music group (happy and fearful; n =
51) and a low emotional arousal music group (sad and calm; n = 44). An independent measures t-test revealed no difference between total recall for the high emotion arousal music group (M = 25.33, SD = 14.91) and the low emotion arousal music group (M = 29.18, SD = 15.90), t(93) = −1.22, p = .23, r² = .02.
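This t value can be reconstructed from the reported group means, standard deviations, and sample sizes via the pooled-variance (Student's) independent-samples formula:

```python
# Student's independent-samples t from summary statistics,
# using the means, SDs, and ns reported for the two arousal groups.
from math import sqrt

m1, sd1, n1 = 25.33, 14.91, 51  # high emotional arousal music (happy + fearful)
m2, sd2, n2 = 29.18, 15.90, 44  # low emotional arousal music (sad + calm)

# Pooled variance and the resulting t statistic
sp2 = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(round(t, 2), df)  # -> -1.22 93
```

The reconstruction agrees with the reported t(93) = −1.22.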
Discussion
The aim of this study was to examine the contribution of three theories, previously utilized to
explain emotion-facilitated memory, to understanding the effect of music on encoding of posi-
tively and negatively valenced information. Of the three theories examined, support was obtained
for the mood congruence theory only. The results supported the mood congruence hypothesis that
the effect of music on recall would depend on congruency between the emotional valence of the
narrative and the emotional valence of the music. This is consistent with previous research that has
demonstrated mood-congruent recall (e.g., Bower, Gilligan, & Monteiro, 1981; Forgas & Bower,
1987; Nasby, 1996). In contrast, neither the function hypothesis that music type would interact
with the type of information to be recalled, nor the emotional arousal hypothesis that high arousal
music would yield higher recall than low arousal music, was supported in this context.
Moderation by trait mood repair
Interestingly, the mood congruence effect was asymmetrical. While participants recalled sig-
nificantly more positive information than negative information following exposure to positive
music, the finding that participants recalled more negative information than positive informa-
tion following exposure to negative music was a non-significant trend only. This finding implies
a positive emotion bias which is consistent with previous studies in which mood congruence
was greater for the positive valence conditions compared to the negative valence conditions
(e.g., Fiedler et al., 2003; Gilligan & Bower, 1983; Nasby & Yando, 1982). Importantly, mood
regulation moderated this effect, with symmetrical mood congruence observed in participants
with high mood repair scores. In contrast, there was no significant mood-congruent effect in
individuals with low mood repair scores. These findings appear inconsistent with previous research, in which participants with high mood repair scores were more likely to maintain positive moods and repair negative moods, such that mood congruence in the negative condition would be expected to be diminished. Conversely, those low on mood repair were expected to demonstrate symmetry, or a negative bias, because the negative mood would be maintained or enhanced.
The absence of diminished mood congruence in the negative music condition for high mood
repair individuals may, however, be explained by the type of mood repair strategy employed.
Negative mood repair can occur via different mood regulation strategies (Gross, 1998; Gross &
Thompson, 2007), two of which (distraction and reappraisal) correlate positively with the
TMMS (John & Gross, 2007). If a distraction strategy is employed, then an individual directs
attention away from the unpleasant information and towards pleasant information (John &
Gross, 2007; Gross & Thompson, 2007) and this strategy may account for diminished mood
congruence in the negative condition (Bower & Forgas, 2001; Rusting, 1998, 2001). However,
if a reappraisal strategy is employed, then an individual reappraises the unpleasant information
(John & Gross, 2007; Gross & Thompson, 2007), presumably processing it further, which may
account for the symmetrical mood congruence observed. This finding suggests that mood
repair moderates mood congruence and further differentiation of the mood repair strategies
could clarify this relationship.
The absence of mood congruence for participants with low mood repair scores may be attrib-
uted to an absence of mood awareness. These participants are more likely to be passive in
response to their mood and less aware of their mood compared to those high on mood repair
(Salovey et al., 1995); limited insight into one’s own mood such as this has previously been
demonstrated to result in an absence of mood congruence (e.g., Rothkopf & Blaney, 1991). This
finding suggests that mood awareness may also moderate mood congruence. Nonetheless, the
symmetrical mood congruence observed in participants with high mood repair scores provides
support for mood congruence theory.
It is of note that in the current study, the experimental groups were compared rather than
groups defined by subjective ratings. While the felt emotion ratings generally indicated that the
music excerpts induced the expected emotional states, these ratings were nevertheless partly
confounded by intervening tasks. That is, participants were asked to rate the music after listen-
ing to the narrative, completing the distraction tasks, and free recall of the narrative. These
tasks are likely to have influenced the subjective ratings of the music excerpts, and therefore
were a less reliable method of categorizing participants than the experimental groups. Given
that the music excerpts had successfully induced the intended emotional states in previous
studies where subjective ratings and physiological responses were measured without interven-
ing tasks and time delay (Kreutz et al., 2008; Krumhansl, 1997; Mayer et al., 1995;
Mitterschiffthaler et al., 2007; Panksepp & Bekkedal, 1997; Pelletier, 2004), it is highly proba-
ble that the pieces generally induced the intended emotional state.
Absence of support for the function theory
No support for the function theory was obtained in the current study. This finding is incon-
sistent with the findings of Levine and Burgess (1997), which may be attributable to differ-
ences in the mood induction procedure. In particular, the instrumental music may not have
been sufficiently effective at inducing discrete emotions (Juslin & Laukka, 2004; Scherer,
2004), which, given the relatively small sample sizes, means that this experiment may have
had insufficient power to detect the resulting small effects. While the online nature of this study
required experimenter-selected pieces, a stronger effect size may also be obtained via use of
participant-selected music pieces in future research. Although the intended basic emotion
was the most prominently reported, other emotions may have been partially induced as
well, as indicated in the previous studies (Hunter, Schellenberg, & Schimmack, 2010; Kreutz
et al., 2008; Krumhansl, 1997; Mayer et al., 1995; Mitterschiffthaler et al., 2007; Panksepp
& Bekkedal, 1997). This lack of exclusive basic emotion induction may have diminished basic
emotion action tendencies, preventing any of the anticipated specific encoding. This con-
trasts with the mood induction procedure used by Levine and Burgess (1997) where group
allocation was based exclusively on the primary emotion induced. A secondary analysis
based on the primary emotion induced could not be achieved in this study because of inad-
equate sample sizes that resulted from the naturally forming groups, but is recommended
for future research, accompanied with a more integrated measure of the emotional states
induced. Continuous measurement of physiological responses (e.g., skin conductance
response) and motor expression (e.g., facial muscle responses) during the experiment
(Grewe, Nagel, Kopiez, & Altenmuller, 2007; Krumhansl, 1997; Witvliet & Vrana, 1995)
may provide a solution to this difficulty.
Absence of support for the emotional arousal theory
Similarly, there was no support for the emotional arousal theory in the current data. This finding
appears inconsistent with previous studies (Burke et al., 1992; Christianson & Loftus, 1987; Christianson & Loftus, 1991; Cahill & McGaugh, 1995; Cahill et al., 1994; Heuer & Reisberg, 1990; Judde & Rickard, 2010; although see Eschrich, Münte, & Altenmüller, 2008, who found that valence rather than arousal levels best predicted recall of music pieces). However, two factors may account for this inconsistency. First, in this study, emotional arousal was
induced prior to the information to be recalled, while in previous studies in which an arousal
effect has been demonstrated, emotional arousal was induced simultaneously by virtue of the
information itself. It is possible that emotional arousal does not influence encoding when it
precedes the information to be recalled. Second, in this study, recall was tested within approxi-
mately 5 minutes of learning the information. Although emotional arousal has previously
been found to enhance encoding when tested shortly afterwards (Burke et al., 1992;
Christianson & Loftus, 1991; Christianson & Loftus, 1987), the effect increases with time
(Levine & Edelstein, 2009; Reisberg, 2006), so it would be of interest to replicate this study with
a longer learning-test interval.
Emotion-based theoretical frameworks
The current study illuminates the inconsistent findings on the effect of music on memory.
Utilizing an emotion-based theoretical framework, the facilitatory effect of music presented prior to learning on the recall of narratives was demonstrated to be best explained by the congruence between the emotional valence of the music and the emotional valence of the stories, particularly for participants who effectively regulate their mood. This may explain,
then, why previous research appears inconsistent, as mood congruence has typically not
been a factor in the selection of music stimuli. It is interesting that music selected for rhyth-
mic or melodic reasons was found to have no facilitatory effect on encoding in several stud-
ies (Crawford & Strapp, 1994; Furnham & Allass, 1999; Furnham & Strbac, 2002). In
contrast, music selected for emotional valence was found to have a facilitatory effect on
encoding; positive valence music compared to negative valence music has facilitated recall
(e.g., Cassidy & MacDonald, 2007; Hallam, Price, & Katsarou, 2002). Although the emotional valence of the information was not specified in those studies, so mood congruence could not be assessed, the emotional valence of the music clearly influenced learning. Taken
together, these findings provide direction for music therapists and teachers who might con-
sider utilizing music to facilitate encoding. It is recommended that, where possible, the use
of music be guided by an emotion-based theoretical framework with an awareness of the
congruence between the emotional valence of music and the emotional valence of the infor-
mation presented, along with an appreciation of the individual differences that participants
may have in their tendency to regulate their mood. Importantly, in contexts other than that
described here, for instance with music presented during or after learning, or recall tested at
longer delays, the effects of music on recall may be better explained by the function or emo-
tional arousal theory of emotion-facilitated memory.
Notes
1. There is ongoing debate in the literature regarding whether music is capable of inducing ‘real world’
emotional states (e.g., Davies, 2010; Konecni, 2008), which is beyond the scope of this paper. Nev-
ertheless, music is regarded as one of the more effective means of inducing ‘authentic’ emotions (Eich et
al., 2007), and the reader is referred to a review of the mechanisms by which music elicits emotions
by Juslin and Vastfjall (2008) and a comparison of the efficacy of emotion models in explaining
music-induced emotional responses by Vuoskoski and Eerola (2011).
2. While the fearful music excerpt was approximately one minute shorter than the others, this excerpt
was selected because it successfully induced fear. This criterion was considered more important than
the duration of the piece given that the time at which the emotion is induced varies widely across
individuals and pieces even when duration is equivalent.
3. While there are no established cut-off points for high and low categories in the TMMS scales, the
median split occurred at a mood repair score of 22.00, which was consistent with both the sample
mean (M = 21.49, SD = 4.21) and previous research. For instance, the mean mood repair score in
an Australian normative sample was 23.20 (SD = 4.30) (Palmer, Gignac, Bates, & Stough, 2003),
while a sample of first year Psychology students from an Australian university yielded a mean mood
repair score of 20.53 (SD = 5.00) (Davies, Stankov, & Roberts, 1998). Given the consistency with
the sample mean and previous normative data, categorization into high and low at a cut-off of 22.00
was considered representative.
References
Blaney, P. H. (1986). Affect and memory: A review. Psychological Bulletin, 99(2), 229–246.
Bower, G. H. (1981). Mood and memory. American Psychologist, 36(2), 129–148.
Bower, G. H. (1983). Affect and cognition. Philosophical Transactions of the Royal Society of London Series
B-Biological Sciences, 302(1110), 387–402.
Bower, G. H. (1992). How might emotions affect learning? In S. Christianson (Ed.), The handbook of emo-
tion and memory: Research and theory (pp. 3–32). NJ, US: Lawrence Erlbaum Associates, Inc.
Bower, G. H., & Forgas, J. P. (2000). Affect, memory, and social cognition. In E. Eich, J. F. Kihlstrom, G. H.
Bower, J. P. Forgas & P. M. Niedenthal (Eds.), Cognition and emotion (pp. 87–168). New York, NY: Oxford
University Press.
Bower, G. H., & Forgas, J. P. (2001). Mood and social memory. In J. P. Forgas (Ed.), Handbook of affect and
social cognition (pp. 95–120). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Bower, G. H., Gilligan, S. G., & Monteiro, K. P. (1981). Selectivity of learning caused by affective states.
Journal of Experimental Psychology: General, 110(4), 451–473.
Burke, A., Heuer, F., & Reisberg, D. (1992). Remembering emotional events. Memory & Cognition, 20(3),
277–290.
Cahill, L., & McGaugh, J. L. (1995). A novel demonstration of enhanced memory associated with emo-
tional arousal. Consciousness and Cognition: An International Journal, 4(4), 410–421.
Cahill, L., & McGaugh, J. L. (1998). Mechanisms of emotional arousal and lasting declarative memory.
Trends in Neurosciences, 21(7), 294–299.
Cahill, L., Prins, B., Weber, M., & McGaugh, J. L. (1994). β-Adrenergic activation and memory for emo-
tional events. Nature, 371(6499), 702–704.
Cassidy, G., & MacDonald, R. A. (2007). The effect of background music and background noise on the
task performance of introverts and extraverts. Psychology of Music, 35(3), 517–537.
Christianson, S.-A., & Loftus, E. F. (1987). Memory for traumatic events. Applied Cognitive Psychology,
1(4), 225–239.
Christianson, S.-A., & Loftus, E. F. (1991). Remembering emotional events: The fate of detailed informa-
tion. Cognition & Emotion, 5(2), 81–108.
Crawford, H. J., & Strapp, C. M. (1994). Effects of vocal and instrumental music on visuospatial and verbal
performance as moderated by studying preference and personality. Personality and Individual Differ-
ences, 16(2), 237–245.
Davies, S. (2010). Emotions expressed and aroused by music: Philosophical perspectives. In P. N. Juslin & J.
A. Sloboda (Eds.), Handbook of music and emotion: Theory, research, applications (pp. 15–43). New York,
NY: Oxford University Press.
Davies, M., Stankov, L., & Roberts, R. D. (1998). Emotional intelligence: In search of an elusive construct.
Journal of Personality and Social Psychology, 75(4), 989–1015.
Eich, E., & Forgas, J. P. (2003). Mood, cognition, and memory. In I. B. Weiner. (Ed.), Handbook of psychology:
Experimental psychology, Vol. 4. (pp. 61–83). Hoboken, NJ: John Wiley & Sons Inc.
Eich, E., Ng, J. T. W., Macaulay, D., Percy, A. D., & Grebneva, I. (2007). Combining music with thought
to change mood. In J. A. Coan & J. J. B. Allen (Eds.), Handbook of emotion elicitation and assessment
(pp. 124–136). New York, NY: Oxford University Press.
Ellis, H. C., & Moore, B. A. (1999). Mood and memory. In T. Dalgleish & M. J. Power (Eds.), Handbook of
cognition and emotion (pp. 193–210). New York, NY, US: John Wiley & Sons Ltd.
Eschrich, S., Münte, T. F., & Altenmüller, E. O. (2008). Unforgettable film music: The role of emotion in
episodic long-term memory for music. BMC Neuroscience, 9, 48.
Fiedler, K. (2001). Affective influences on social information processing. In J. P. Forgas (Ed.), Handbook of
affect and social cognition (pp. 163–185). Mahwah, NJ: Lawrence Erlbaum Associates Publishers.
Fiedler, K., Nickel, S., Asbeck, J., & Pagel, U. (2003). Mood and the generation effect. Cognition & Emotion,
17(4), 585–608.
Forgas, J. P. (1992). Mood and the perception of unusual people: Affective asymmetry in memory and
social judgments. European Journal of Social Psychology, 22(6), 531–547.
Forgas, J. P. (1995). Mood and judgment: The affect infusion model (AIM). Psychological Bulletin, 117(1),
39–66.
Forgas, J. P. (1999). Network theories and beyond. In T. Dalgleish & M. J. Power (Eds.), Handbook of cogni-
tion and emotion (pp. 591–611). New York, NY: John Wiley & Sons Ltd.
Forgas, J. P., & Bower, G. H. (1987). Mood effects on person-perception judgments. Journal of Personality
and Social Psychology, 53, 53–60.
Fredrickson, B. L. (2001). The role of positive emotions in positive psychology: The broaden-and-build
theory of positive emotions. American Psychologist, 56(3), 218–226.
Furnham, A., & Allass, K. (1999). The influence of musical distraction of varying complexity on the cog-
nitive performance of extroverts and introverts. European Journal of Personality, 13(1), 27–38.
Furnham, A., & Strbac, L. (2002). Music is as distracting as noise: The differential distraction of back-
ground music and noise on the cognitive test performance of introverts and extraverts. Ergonomics,
45(3), 203–217.
Gayle, M. C. (1997). Mood-congruency in recall: The potential effect of arousal. Journal of Social Behavior
& Personality, 12(2), 471–480.
Gilligan, S. G., & Bower, G. H. (1983). Reminding and mood-congruent memory. Bulletin of the Psycho-
nomic Society, 21(6), 431–434.
Grewe, O., Nagel, F., Kopiez, R., & Altenmuller, E. (2007). Emotions over time: Synchronicity and devel-
opment of subjective, physiological, and facial affective reactions to music. Emotion, 7(4), 774–788.
Gross, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General
Psychology, 2(3), 271–299.
Gross, J. J., & Thompson, R. A. (2007). Emotion regulation: Conceptual foundations. In J. J. Gross (Ed.),
Handbook of emotion regulation (pp. 3–24). New York, NY: Guilford Press.
Hallam, S., Price, J., & Katsarou, G. (2002). The effects of background music on primary school pupils’
task performance. Educational Studies, 28(2), 111–122.
Heuer, F., & Reisberg, D. (1990). Vivid memories of emotional events: The accuracy of remembered minu-
tiae. Memory & Cognition, 18(5), 496–506.
Hunter, P. G., Schellenberg, E. G., & Schimmack, U. (2010). Feelings and perceptions of happiness and
sadness induced by music: Similarities, differences, and mixed emotions. Psychology of Aesthetics, Cre-
ativity, and the Arts, 4(1), 47–56.
Isen, A. M. (1999). Positive affect. In T. Dalgleish & M. J. Power (Eds.), Handbook of cognition and emotion
(pp. 521–539). New York, NY: John Wiley & Sons Ltd.
John, O. P., & Gross, J. J. (2007). Individual differences in emotion regulation. In J. J. Gross (Ed.), Handbook
of emotion regulation (pp. 351–372). New York, NY: Guilford Press.
Judde, S., & Rickard, N. S. (2010). The effect of post-learning presentation of music on long-term word list
retention. Neurobiology of Learning and Memory, 94, 13–20.
Juslin, P. N., & Laukka, P. (2004). Expression, perception, and induction of musical emotions: A review
and a questionnaire study of everyday listening. Journal of New Music Research, 33(3), 217–238.
Juslin, P. N., & Vastfjall, D. (2008). Emotional responses to music: The need to consider underlying mecha-
nisms. Behavioral & Brain Sciences, 31(5), 559–575.
Konecni, V. J. (2008). Does music induce emotion? A theoretical and methodological analysis. Psychology
of Aesthetics, Creativity, and the Arts, 2(2), 115–129.
Kreutz, G., Ott, U., Teichmann, D., Osawa, P., & Vaitl, D. (2008). Using music to induce emotions: Influ-
ences of musical preference and absorption. Psychology of Music, 36(1), 101–126.
Krumhansl, C. L. (1997). An exploratory study of musical emotions and psychophysiology. Canadian Jour-
nal of Experimental Psychology, 51, 336–353.
Lang, P. J., Bradley, M. M., & Cuthbert, B. N. (2005). International affective picture system (IAPS): Instruction
manual and affective ratings. Gainesville, FL, US: University of Florida.
Lazarus, R. S. (1991). Emotion and adaptation. New York, NY: Oxford University Press.
Levine, L. J., & Burgess, S. L. (1997). Beyond general arousal: Effects of specific emotions on memory.
Social Cognition, 15(3), 157–181.
Levine, L. J., & Edelstein, R. S. (2009). Emotion and memory narrowing: A review and goal-relevance
approach. Cognition & Emotion, 23(5), 833–875.
Levine, L. J., & Pizarro, D. A. (2004). Emotion and memory research: A grumpy overview. Social Cognition,
22(5), 530–554.
Levine, L. J., & Pizarro, D. A. (2006). Emotional valence, discrete emotions, and memory. In B. Uttl, N.
Ohta & A. L. Siegenthaler (Eds.), Memory and emotion: Interdisciplinary perspectives (pp. 37–58). Mal-
den, MA, US: Blackwell Publishing.
Mayer, J. D., Allen, J. P., & Beauregard, K. (1995). Mood inductions for four specific moods: A procedure
employing guided imagery vignettes with music. Journal of Mental Imagery, 19(1–2), 151–159.
Mitterschiffthaler, M. T., Fu, C. H. Y., Dalton, J. A., Andrew, C. M., & Williams, S. C. R. (2007). A functional
MRI study of happy and sad affective states induced by classical music. Human Brain Mapping, 28,
1150–1162.
Nasby, W. (1996). Moderators of mood-congruent encoding and judgement: Evidence that elated and
depressed moods implicate distinct processes. Cognition & Emotion, 10(4), 361–377.
Nasby, W., & Yando, R. (1982). Selective encoding and retrieval of affectively valent information: Two
cognitive consequences of children’s mood states. Journal of Personality and Social Psychology, 43(6),
1244–1253.
Palmer, B., Gignac, G., Bates, T., & Stough, C. (2003). Examining the structure of the Trait Meta-Mood
Scale. Australian Journal of Psychology, 55(3), 154–158.
Panksepp, J., & Bekkedal, M. Y. (1997). The affective cerebral consequence of music: Happy vs sad effects
on the EEG and clinical implications. International Journal of Arts Medicine, 5(1), 18–27.
Pelletier, C. L. (2004). The effect of music on decreasing arousal due to stress: A meta-analysis. Journal of
Music Therapy, 41(3), 192–214.
Reips, U.-D. (2002). Standards for Internet-based experimenting. Experimental Psychology, 49(4),
243–256.
Reisberg, D. (2006). Memory for emotional episodes: The strengths and limits of arousal-based accounts.
In B. Uttl, N. Ohta & A. L. Siegenthaler (Eds.), Memory and Emotion: Interdisciplinary Perspectives (pp.
13–36). Malden, MA, US: Blackwell Publishing.
Rickard, N. S., Toukhsati, S. R., & Field, S. E. (2005). The effect of music on cognitive performance: Insight
from neurobiological and animal studies. Behavioral and Cognitive Neuroscience Reviews, 4(4),
235–261.
Rothkopf, J. S., & Blaney, P. H. (1991). Mood-congruent memory: The role of affective focus and gender.
Cognition and Emotion, 5(1), 53–64.
Rusting, C. L. (1998). Personality, mood, and cognitive processing of emotional information: Three con-
ceptual frameworks. Psychological Bulletin, 124(2), 165–196.
Rusting, C. L. (2001). Personality as a moderator of affective influences on cognition. In J. P. Forgas (Ed.),
Handbook of affect and social cognition (pp. 371–391). Mahwah, NJ: Lawrence Erlbaum Associates Pub-
lishers.
Salovey, P., Mayer, J. D., Goldman, S. L., Turvey, C., & Palfai, T. P. (1995). Emotional attention, clarity, and
repair: Exploring emotional intelligence using the Trait Meta-Mood Scale. In J. W. Pennebaker (Ed.),
Emotion, disclosure, and health (pp. 125–154). Washington, DC: American Psychological Association.
Scherer, K. R. (2004). Which emotions can be induced by music? What are the underlying mechanisms?
And how can we measure them? Journal of New Music Research, 33(3), 239–251.
Singer, J. A., & Salovey, P. (1988). Mood and memory: Evaluating the network theory of affect. Clinical
Psychology Review, 8(2), 211–251.
Sloboda, J. A., & Juslin, P. N. (2001). Psychological perspectives on music and emotion. In P. N. Juslin &
J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 71–104). New York, NY: Oxford Uni-
versity Press.
Sloboda, J. A., & O’Neill, S. A. (2001). Emotions in everyday listening to music. In P. N. Juslin &
J. A. Sloboda (Eds.), Music and emotion: Theory and research (pp. 415–429). New York, NY: Oxford
University Press.
Talarico, J. M., Berntsen, D., & Rubin, D. C. (2009). Positive emotions enhance recall of peripheral details.
Cognition & Emotion, 23(2), 380–398.
Thompson, W. F., Schellenberg, E. G., & Husain, G. (2001). Arousal, mood, and the Mozart effect. Psycho-
logical Science, 12(3), 248–251.
Vuoskoski, J. K., & Eerola, T. (2011). Measuring music-induced emotion: A comparison of emotion mod-
els, personality biases, and intensity of experiences. Musicae Scientiae, 15, 159–173.
Witvliet, C. v., & Vrana, S. R. (1995). Psychophysiological responses as indices of affective dimensions.
Psychophysiology, 32(5), 436–443.
Appendix: The selected music excerpts for each target emotion

The composition, composer, excerpt, target emotion, and the mean rating of the target emotion from the pilot study.

Title                                Composer   Excerpt      Emotion   Pilot rating
Radetzky march                       Strauss    0:00–03:00   Happy     3.57 (1.55)
Adagio (G minor)                     Albinoni   0:00–03:00   Sad       3.71 (1.33)
Arcana for full orchestra            Varese     0:00–01:58   Fear      3.00 (1.41)
Prelude to the afternoon of a faun   Debussy    0:00–03:00   Calm      −

Note. The rating scale was 1 (low) to 5 (high). Figures in brackets are standard deviations.