ArticleLiterature Review

Predictions and the brain: How musical sounds become rewarding


Abstract

Music has always played a central role in human culture. The question of how musical sounds can have such profound emotional and rewarding effects has been a topic of interest throughout generations. At a fundamental level, listening to music involves tracking a series of sound events over time. Because humans are experts in pattern recognition, temporal predictions are constantly generated, creating a sense of anticipation. We summarize how complex cognitive abilities and cortical processes integrate with fundamental subcortical reward and motivation systems in the brain to give rise to musical pleasure. This work builds on previous theoretical models that emphasize the role of prediction in music appreciation by integrating these ideas with recent neuroscientific evidence. Copyright © 2014 Elsevier Ltd. All rights reserved.


... Zatorre (2015) has studied what is termed music reward, which triggers the release of dopamine (DA) when a person listens to music they find pleasurable. His laboratory has found that musical pleasure is tied to each individual's musical experience, with a relationship between pleasure and familiarity that arises from our constant drive toward rhythmic-melodic expectation; this results in DA release and culminates in a pleasurable experience after a correct prediction, or after a positive prediction error that exceeds expectations (Salimpoor et al., 2014). For this reason, I intend to add the dimensions of pleasure and familiarity to the multidimensional emotional evaluation model used in the behavioral tests. ...
... Important studies on musical pleasure linked to the expectation of musical templates have been carried out by Robert J. Zatorre and colleagues at the BRAMS laboratory in Canada (e.g., Salimpoor et al., 2014; Zatorre, 2015). According to Salimpoor et al. (2014), when we perceive familiar sound patterns, we try to predict their sequence, anticipating the direction of the melody as well as the rhythm and harmony. ...

... When a person hears a melody for the first time, it will be pleasurable if it is already familiar, or if it contains familiar elements that allow a more or less accurate prediction. When that prediction fails, learning of the new structure is reinforced if the outcome exceeded expectations (positive prediction error), whereas interest is lost if the result was too predictable or simple; if, on the other hand, the piece was too complex for its structure to be deciphered, the experience is unpleasant (Salimpoor et al., 2014; Zatorre, 2015). ...
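The positive-prediction-error mechanism described in these excerpts can be illustrated with a minimal delta-rule update. This is a sketch only; the function name, learning rate, and reward values are hypothetical illustrations, not quantities taken from Salimpoor et al. (2014):

```python
def update_expectation(expected, observed, learning_rate=0.3):
    """Classic delta-rule update: a positive prediction error
    (observed reward exceeds expectation) raises the expectation;
    a negative one lowers it."""
    prediction_error = observed - expected
    return expected + learning_rate * prediction_error, prediction_error

# A listener who expects a mildly pleasant resolution (0.5) but hears
# a surprisingly rewarding one (1.0): a positive prediction error.
expectation = 0.5
expectation, pe = update_expectation(expectation, observed=1.0)
```

On this account, repeated accurate predictions shrink the error term, which is one way to read the loss of interest in overly predictable music described above.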
Thesis
Full-text available
This study consists of two parts. In the first part, I studied the emotional link between music and color, as well as the relationship that pleasure and familiarity have with music-evoked emotions. The study is based on Palmer et al. (2013), attempting to replicate their results but with atonal music, given the importance that mode (major or minor) has in determining musical emotion. Unlike these authors, I decided to use Russell's (1980) multidimensional emotional scale model to evaluate music and colors, although I did keep the variables of the perceptual characteristics of color that they used. My results showed that, with atonal music, music evaluated as joyful was linked to colors evaluated as joyful (saturated, light, and yellow), and sad music to sad colors (desaturated, dark, and blue). In this way, I was able to observe similar behavior between atonal and tonal music. Regarding my other variables, I found a positive correlation between Pleasure and Emotional Valence, as well as between Pleasure and Familiarity, although half of the music evaluated as sad (negative valence) was also evaluated as pleasant. The second part served as a complement to the subjective evaluations of emotion and pleasure of the music. To do this, I studied electrical brain patterns through electroencephalographic (EEG) recording, using frontal asymmetry models to distinguish valence and emotional arousal (Schmidt & Trainor, 2001), as well as the increase in frontal theta rhythm for pleasurable stimuli (Chabin et al., 2020). There was an increase in right frontal theta correlated with unpleasant, negative-valence stimuli, as well as an increase in left frontal theta with pleasant, positive-valence music (Rogenmoser et al., 2016). Finally, in agreement with the results of Schmidt and Trainor (2001), high-arousal stimuli (joyful music) were associated with an increase in frontal activity (alpha suppression).
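The frontal asymmetry models mentioned in this thesis abstract are commonly operationalized as a log-ratio of alpha-band power between homologous frontal electrodes. A minimal sketch, assuming hypothetical band-power values for electrodes F3/F4 (not data from the study):

```python
import math

def frontal_asymmetry(left_alpha_power, right_alpha_power):
    """Frontal asymmetry index as widely used in the EEG emotion
    literature: ln(right) - ln(left) alpha power. Because alpha is
    inversely related to activation, a positive index reflects greater
    relative left-hemisphere activity, conventionally linked to
    positive valence / approach motivation."""
    return math.log(right_alpha_power) - math.log(left_alpha_power)

# Hypothetical alpha band powers (arbitrary units) at F3 and F4.
index = frontal_asymmetry(left_alpha_power=2.0, right_alpha_power=4.0)
```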
... The use of music as a coping strategy has been attributed to the sense of pleasure that arises when humans synchronize to external rhythms (Dunbar, 2012; Koelsch, 2014; Launay et al., 2016; Salimpoor et al., 2015; Thaut et al., 2015; Trost et al., 2014; Vuust & Kringelbach, 2010), which in turn may be linked to anxiolytic and elating effects mediated by the release and circulation of dopamine, oxytocin, and endorphins (as detailed in section 3.4). Purportedly, these acted as reinforcers for the preservation and elaboration of rhythmic synchronization into rituals and, ultimately, into that which today is called music and dance (Brown, 2000a; Dissanayake, 2006, 2009a; Mithen, 2005). ...
... It has long been theorized that dopamine is involved in mediating the processing and production of musical rhythm and in the feeling of pleasure elicited by engaging with music (Ferreri et al., 2019; Salimpoor et al., 2015). Dopamine is generally known as the main neurotransmitter involved in reward and motivation processing, although distinct dopaminergic pathways are also implicated in learning, executive function, motor function, and neuroendocrine control (Alcaro et al., 2007). ...

... Several cognitive computations that are dopamine dependent have been proposed to account for the role of dopamine in musical behaviors, such as expectations regarding rhythmic structure and the violation of these expectations (Salimpoor et al., 2015; Vuust & Kringelbach, 2010; Zatorre & Salimpoor, 2013), associative or episodic memory (Janata, 2009; Panksepp & Bernatzky, 2002), and temporal processing in the millisecond range (e.g., Merchant et al., 2013). Starting with the seminal study of Blood and Zatorre (2001), brain imaging research has repeatedly established increased activity in dopamine-rich areas, such as the striatum, when listening to pleasurable music (for reviews, see Koelsch, 2014; Zatorre, 2015). ...
Article
There has recently been a growing interest in investigating rhythm cognition and behavior in nonhuman animals as a way of tracking the evolutionary origins of human musicality – i.e., the ability to perceive, enjoy and produce music. During the last two decades, there has been an explosion of theoretical proposals aimed at explaining why and how humans have evolved into musical beings, and the empirical comparative research has also gained momentum. In this paper, we focus on the rhythmic component of musicality, and review functional and mechanistic theoretical proposals concerning putative prerequisites for perceiving and producing rhythmic structures similar to those encountered in music. For each theoretical proposal we also review supporting and contradictory empirical findings. To acknowledge that the evolutionary study of musicality requires an interdisciplinary approach, our review strives to cover perspectives and findings from as many disciplines as possible. We conclude with a research agenda that highlights relevant, yet thus far neglected topics in the comparative and evolutionary study of rhythm cognition. Specifically, we call for a widened research focus that will include additional rhythmic abilities besides entrainment, additional channels of perception and production besides the auditory and vocal ones, and a systematic focus on the functional contexts in which rhythmic signals spontaneously occur. With this expanded focus, and drawing from systematic observation and experimentation anchored in multiple disciplines, animal research is bound to generate many important insights into the adaptive pressures that forged the component abilities of human rhythm cognition and their (socio)cognitive and (neuro)biological underpinnings.
... Through exposure to a particular music style, an individual implicitly learns the laws of rhythm, melody, harmony, and other aspects of sound organization, forming implicit expectations about how these features are supposed to develop through auditory experience [1]. If these expectations are met, the individual experiences satisfaction while listening to music, and vice versa: if the expectations are violated, the listener is not satisfied with the listening experience [2]. The aim of this paper was to investigate how implicit musical knowledge influences satisfaction during listening to expected and unexpected harmonic musical sequences, and how this interacts with the participant's music education and the overall pitch height of the sequences. ...
... With the continuous exposure to western music, individuals implicitly internalize the structure of the typical chord progression to the extent that the typical form of the harmonic syntax in compositions is perceived as expected and satisfying, whereas the violations of the syntax rules are perceived as unexpected and unsatisfying [2,4]. Loui and Wessel [5] have demonstrated this effect for both musicians and non-musicians. ...
... [7,8,15]). Our study confirms that exposure to a particular type of music makes individuals internalize the music style and learn the rules of the harmony implicitly [2]. ...
Article
Full-text available
Music is an integral part of our everyday lives. Through continuous exposure to a particular music style, an individual implicitly learns the laws of music, including the typical progression of chords that accompany the leading melody. Previous research has shown that the typical chord order in compositions is perceived as expected and satisfying, whereas violations of the typical chord progressions are perceived as unexpected and unsatisfying. In this paper, we investigated how implicit musical knowledge influences satisfaction during listening to expected and unexpected chord progressions, taking into account the participant's music education and the overall pitch height of the chordal sequences. Ninety-seven participants (43 musicians and 54 non-musicians) took part in the experiment. They were asked to rate their degree of satisfaction while listening to expected and unexpected chord progressions, in either a high-pitch or a low-pitch condition. The results showed that participants were more satisfied with expected than unexpected chord progressions, confirming previous findings on the role of implicit learning of the rules of harmony. Although the results did not reveal an effect of music education during listening to expected chord progressions, musicians evaluated unexpected progressions as less satisfying than non-musicians did, suggesting that musicians are more susceptible to violations of typical chord order. Finally, the results showed that the difference in satisfaction between expected and unexpected progressions was larger in the high-pitch than in the low-pitch condition, suggesting that under the low-pitch condition chord progressions were more difficult to discriminate, consistent with the low-interval limit theory.
... They would involve brain areas and functions that are relatively preserved, or only partially affected, by the disease. Thus, the processing of musical emotions, which depends on a large brain network notably involving the cortico-striatal reward circuit and the dopaminergic system (Salimpoor, Zald, Zatorre, Dagher, & McIntosh, 2015; Mas-Herrero, Dagher, & Zatorre, 2018; Ferreri et al., 2019), would be, at least in part, spared in this pathology. Even though alteration of the dopaminergic circuit has been described in Alzheimer's disease (Gibb, Mountjoy, Mann, & Lees, 1989), numerous clinical observations suggest that people with Alzheimer's disease continue to show pleasure during musical interventions. ...
... This pleasure could be explained by audio-motor coupling between movement and the beat, based on temporal predictions (when they are correct) of stimulus onset (Salimpoor, Benovoy, Larcher, Dagher, & Zatorre, 2011). These temporal expectations would activate the brain's reward circuits, modulating emotional state and motivation (Salimpoor et al., 2015; Mas-Herrero et al., 2018; Ferreri et al., 2019). Given the importance of audio-motor coupling, the type of movements used in response to music could influence the pleasure of moving to its rhythm. ...
... Music played at a fast tempo is often perceived as more cheerful and stimulating than music played at a slow tempo (Husain, Thompson, & Schellenberg, 2002;Khalfa, Roy, Rainville, Dalla Bella, & Peretz, 2008). Listening to music also provides pleasure, as many studies have shown (Blood & Zatorre, 2001;Salimpoor, Benovoy, Larcher, Dagher, & Zatorre, 2011;Salimpoor, Zald, Zatorre, Dagher, & McIntosh, 2015;Zatorre, 2015). This same sensation of pleasure can be found in dance (Bernardi, Bellemare-Pepin, & Peretz, 2017), which requires SMS based on the coupling between the auditory and motor systems (Janata, Tomic, & Haberman, 2012). ...
Thesis
In musical interventions with people with Alzheimer's disease or related disorders, participants are frequently asked to move to the rhythm of the music. Synchronizing to a musical rhythm, particularly in a group, involves responses at several levels (motor, rhythmic, social, and emotional) and could provide pleasure as well as strengthen the social bonds of patients and those around them. However, little is known in Alzheimer's disease about synchronization to musical rhythm and the links that may exist between these different levels of response to this activity. The aim of this thesis is to examine the different aspects of the behavior of people with Alzheimer's disease (or related disorders) and of participants with 'normal' physiological aging during a musical rhythmic synchronization activity performed in joint action with a musician. The approach adopted in this work is based on a multidisciplinary method combining movement sciences, social psychology, and neuropsychology. First, we studied the effect of social context and of music (and its temporal characteristics) on synchronization performance and on the social, emotional, rhythmic, and motor engagement of people with Alzheimer's disease in this activity (study 1, chapters 4 and 5). The results showed that the physical presence of a singer performing the synchronization task with the participant modulated synchronization performance and the quality of the social and emotional relationship differently than an audio-visual recording of the same singer. This effect of social context was, moreover, stronger in response to music than to a metronome, and was modulated by tempo and meter.
In addition, we found that music increased participants' rhythmic engagement compared with a metronome. We then compared responses to the synchronization task in pathological and physiological aging (study 2, chapters 6 and 7). The results revealed that synchronization performance did not differ between the two groups, suggesting preserved audio-motor coupling in Alzheimer's disease in this task. Although the disease reduced motor, social, and emotional engagement in response to music compared with physiological aging, an effect of social context on behavior was observed in both groups. Finally, we compared the groups of participants with Alzheimer's disease across the two studies, showing that disease severity could impair synchronization and engagement in the activity (chapter 8). In conclusion, this thesis has shown that audio-motor coupling is partly preserved in people with Alzheimer's disease and that joint action with a partner modulates the quality of the social relationship as well as engagement with the music. The theoretical knowledge gained from this work provides a better understanding of how behavior in response to music evolves in Alzheimer's disease. The method developed in this thesis thus offers the opportunity to assess, at several levels, the therapeutic benefits of musical interventions on the behavior of people with Alzheimer's disease. Such prospects would improve the care of these people and of their caregivers.
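Synchronization performance in tasks like the one described above is typically quantified by tap-to-beat asynchronies. A minimal sketch with hypothetical tap and beat times (not data from the thesis); negative mean asynchrony, taps slightly ahead of the beat, is the pattern typically reported for healthy sensorimotor synchronization:

```python
def mean_asynchrony(tap_times, beat_times):
    """Mean signed asynchrony (tap - beat) in seconds: negative values
    indicate anticipatory tapping; values near zero with low spread
    indicate accurate audio-motor coupling."""
    diffs = [t - b for t, b in zip(tap_times, beat_times)]
    return sum(diffs) / len(diffs)

# Hypothetical taps slightly ahead of a 0.5 s (120 BPM) beat grid.
beats = [0.5, 1.0, 1.5, 2.0]
taps = [0.46, 0.97, 1.46, 1.99]
asym = mean_asynchrony(taps, beats)
```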
... First of all, ML is a sensory experience that contains temporally encoded information, which demands auditory-motor interaction (auditory, motor, and premotor areas [24,25] and the limbic system [26,27]). ML requires the analysis of simultaneous and sequential sounds (harmony and melody), which entails a dialogue between the superior temporal cortex and the inferior frontal lobe [28][29][30][31] and mainly involves the temporal poles [30,[32][33][34][35]. Additionally, ML can elicit emotions, which have been described as dissociable from the enjoyment of music [36], implicating the reward circuit [37][38][39][40][41][42]. Further regions have also been reported to be associated with music-evoked emotions, such as the Rolandic operculum [37], ventral parietal and dorsolateral prefrontal cortex [41], inferior temporal gyri [43], and medial prefrontal cortex [44]. ...
... Occipital areas described as involved in visual imagery and visual constructs were also significant for both contrasts: lingual [13,54,66], cuneus [4,53], calcarine [11,67] and fusiform [8,13,14]. We also found activation in commonly reported ML areas: superior temporal gyri [28,34,68], right supplementary motor area [25] (the left supplementary motor area was also involved in Arith because participants were asked to push a button for each answer), left superior and middle frontal gyri [45,54], bilateral medial frontal gyri [36,69], left orbitofrontal cortex [28][29][30], middle temporal gyri [35], temporal poles [30,[32][33][34][35], insular cortex [13,37,39,55], Rolandic operculum [37], inferior frontal triangularis [51,52] and the middle cingulate, which has been reported to play a role in music recognition [47]. Regarding emotion, as expected, both contrasts activated the amygdala [16,21,22,39,42], other areas belonging to the reward system, and the medial prefrontal cortex associated with musical emotion [36,44]. Finally, the primary auditory cortex was barely activated during Arith, as evidenced by the very high statistical significance of this area in LMM>Arith and UMM>Arith (P < 1E-11, Tables C2, C3 in Supplementary Material). ...
Article
Full-text available
Most people have a soundtrack of life, a set of special musical pieces closely linked to certain biographical experiences. Autobiographical memories (AM) and music listening (ML) involve complex mental processes governed by distinct brain networks. The aim of the paper was to determine how both networks interact in linked occurrences. We performed an fMRI experiment on 31 healthy participants (age: 32.4 ± 7.6, 11 men, 4 left-handers). Participants had to recall AMs prompted by music they reported to be associated with personal biographical events (LMM: linked AM-ML events). In the main control task, participants were prompted to recall emotional AMs while listening to known tracks from a pool of popular music (UMM: unlinked AM-ML events). We wanted to investigate to what extent the LMM network exceeded the overlap of the AM and ML networks by contrasting the activation obtained in LMM versus UMM. The contrast LMM>UMM showed the following areas (at P<0.05 FWE-corrected at voxel level and cluster size >20): right frontal inferior operculum, frontal middle gyrus, pars triangularis of the inferior frontal gyrus, occipital superior gyrus and bilateral basal ganglia (caudate, putamen and pallidum), occipital (middle and inferior), parietal (inferior and superior), precentral and cerebellum (6, 7 L, 8 and vermis 6 and 7). Complementary results were obtained from additional control tasks. Given that part of the LMM>UMM areas might not be related to ML-AM linkage, we assessed the LMM brain network by an independent component analysis (ICA) on contrast images. Results from the ICA suggest the existence of a cortico-ponto-cerebellar network including left precuneus, bilateral anterior cingulum, parahippocampal gyri, frontal inferior operculum, ventral anterior part of the insula, frontal medial orbital gyri, caudate nuclei, cerebellum 6 and vermis, which might govern the ML-induced retrieval of AM in closely linked AM-ML events.
This topography may suggest that the pathway by which ML is linked to AM is attentional and directly related to perceptual processing, involving the salience network, rather than the natural way of remembering typically associated with the default mode network.
... This partly explains why, even under low surprise conditions, pleasure can be gained from musical expectations being fulfilled. Representations of musical features might be sparse and decline over time, such that upon repeated listenings new predictions and prediction errors can be generated (Salimpoor et al., 2015). Furthermore, familiar music may remain rewarding upon repeated hearings if its structure is surprising in relation to other pieces of the same genre, that is, when it deviates from schema-like representations (Zatorre, personal communication; Salimpoor et al., 2015). Similarly, liking familiar music can even go as far as disliking variant versions of the same song. ...
Article
Full-text available
Music and spoken language share certain characteristics: both consist of sequences of acoustic elements that are combinatorically combined, and these elements partition the same continuous acoustic dimensions (frequency, formant space and duration). However, the resulting categories differ sharply: scale tones and note durations of small integer ratios appear in music, while speech uses phonemes, lexical tone, and non-isochronous durations. Why did music and language diverge into the two systems we have today, differing in these specific features? We propose a framework based on information theory and a reverse-engineering perspective, suggesting that design features of music and language are a response to their differential deployment along three different continuous dimensions. These include the familiar propositional-aesthetic (‘goal’) and repetitive-novel (‘novelty’) dimensions, and a dialogic-choric (‘interactivity’) dimension that is our focus here. Specifically, we hypothesize that music exhibits specializations enhancing coherent production by several individuals concurrently—the ‘choric’ context. In contrast, language is specialized for exchange in tightly coordinated turn-taking—‘dialogic’ contexts. We examine the evidence for our framework, both from humans and non-human animals, and conclude that many proposed design features of music and language follow naturally from their use in distinct dialogic and choric communicative contexts. Furthermore, the hybrid nature of intermediate systems like poetry, chant, or solo lament follows from their deployment in the less typical interactive context.
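The 'novelty' dimension in this information-theoretic framework can be made concrete as per-event surprisal under a simple sequence model. The toy sketch below uses a bigram model estimated from a hypothetical melody; it illustrates the general quantity, not the authors' formalism:

```python
import math
from collections import Counter

def bigram_surprisal(sequence):
    """Per-event surprisal -log2 P(x_t | x_{t-1}) under a bigram model
    estimated from the sequence itself (smoothing omitted for brevity).
    High surprisal marks an unexpected continuation."""
    bigrams = Counter(zip(sequence, sequence[1:]))
    contexts = Counter(sequence[:-1])
    return [-math.log2(bigrams[(a, b)] / contexts[a])
            for a, b in zip(sequence, sequence[1:])]

# Toy melody as note names: the repeated C-D motif is predictable,
# the final E is the surprising continuation.
melody = ['C', 'D', 'C', 'D', 'C', 'E']
s = bigram_surprisal(melody)
```

In this toy case the final transition carries the highest surprisal, which is the formal counterpart of a violated expectation in the excerpts above.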
... 1,2 A large body of literature has demonstrated that the indescribable pleasure associated with music is modulated by striatal dopaminergic release, [3][4][5][6][7] and predictive coding is one of the main processes involved in musical pleasure and reward mechanisms. 2,[8][9][10][11][12][13][14][15] In other terms, our ability to anticipate and make predictions and expectations about a piece's trajectory can lead us to enjoy the music more. Indeed, while this had long been predicted by theoretical considerations, 11,[16][17][18] recent work used neuroimaging techniques to demonstrate that predictions and expectancies actually play the role attributed to them in musical reward processing. ...

... 19 The IFG and STG are coactivated, sharing functional interactions while listening to music, and might process various musical attributes simultaneously. 2,77 Finally, while the source localization also suggests a broad activation of the cingulate gyrus for the MMN component, we acknowledge that this structure is too deep to be clearly identified with EEG source localization. Nevertheless, it is a potential generator of the MMN and is potentially involved in episodic memory processing. ...
Article
Musical pleasure is related to the capacity to predict and anticipate music. By recording early cerebral responses of 16 participants with electroencephalography during periods of silence inserted in known and unknown songs, we aimed to measure the contribution of different musical attributes to musical predictions. We investigated the mismatch between past encoded musical features and the current sensory inputs when listening to lyrics associated with a vocal melody, to background instrumental material only, or to both attributes grouped together. When participants were listening to chords and lyrics in known songs, the brain responses related to musical violation produced event-related potential responses around 150–200 ms that were of larger amplitude than for chords or lyrics alone. Microstate analysis also revealed that for chords and lyrics, the global field power had increased stability and a longer duration. Source localization identified that the right superior temporal and frontal gyri and the inferior and medial frontal gyri were activated for a longer time for chords and lyrics, likely because of the increased complexity of the stimuli. We conclude that, grouped together, the broader integration and retrieval of several musical attributes at the same time recruit larger neuronal networks that lead to more accurate predictions.
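The global field power used in the microstate analysis above is simply the spatial standard deviation of the scalp voltages across electrodes at each time point. A minimal sketch with hypothetical voltages (not data from the study):

```python
def global_field_power(samples):
    """Global field power at one time point: the standard deviation of
    the voltages across all electrodes. Peaks in GFP index moments of
    strong, stable scalp topography, which microstate analyses segment."""
    n = len(samples)
    mean = sum(samples) / n
    return (sum((v - mean) ** 2 for v in samples) / n) ** 0.5

# Hypothetical voltages (microvolts) across 4 electrodes at one latency.
gfp = global_field_power([2.0, -2.0, 1.0, -1.0])
```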
... It is well-established that sounds have a major influence on the human brain and consequently, human experience (Levitin, 2006;Sacks, 2010). Sounds can reduce stress (Davis and Thaut, 1989), support learning and memory formation (Hallam et al., 2002), improve mood (Chanda and Levitin, 2013), and increase motivation (Salimpoor et al., 2015). Sounds can also do the opposite and create aversive experiences (Schreiber and Kahneman, 2000;Zald and Pardo, 2002;Kumar et al., 2012). ...
... It is possible that these sounds contain more distractors that attract attention away from other objects of attention, or that they contain types of sounds that the brain requires more resources to process (depending on familiar patterns, surprises, and more), leaving fewer resources available to perform other tasks. Sounds in these genres may also activate the reward system differently (Salimpoor et al., 2015; Gold et al., 2019), which can increase motivation to listen intently to the songs themselves rather than orient toward other tasks. Understanding the brain mechanisms that underlie the modified focus from these genres is beyond the scope of the current research, but the mapping found here can provide fruitful avenues for future brain imaging experiments that may be equipped to answer these questions. ...
Article
Full-text available
The goal of this study was to investigate the effect of audio listened to through headphones on subjectively reported human focus levels, and to identify through objective measures the properties that contribute most to increasing and decreasing focus in people within their regular, everyday environment. Participants ( N = 62, 18–65 years) performed various tasks on a tablet computer while listening to either no audio (silence), popular audio playlists designed to increase focus (pre-recorded music arranged in a particular sequence of songs), or engineered soundscapes that were personalized to individual listeners (digital audio composed in real-time based on input parameters such as heart rate, time of day, location, etc.). Audio stimuli were delivered to participants through headphones while their brain signals were simultaneously recorded by a portable electroencephalography headband. Participants completed four 1-h long sessions at home during which different audio played continuously in the background. Using brain-computer interface technology for brain decoding and based on an individual’s self-report of their focus, we obtained individual focus levels over time and used this data to analyze the effects of various properties of the sounds contained in the audio content. We found that while participants were working, personalized soundscapes increased their focus significantly above silence ( p = 0.008), while music playlists did not have a significant effect. For the young adult demographic (18–36 years), all audio tested was significantly better than silence at producing focus ( p = 0.001–0.009). Personalized soundscapes increased focus the most relative to silence, but playlists of pre-recorded songs also increased focus significantly during specific time intervals. Ultimately we found it is possible to accurately predict human focus levels a priori based on physical properties of audio content. 
We then applied this finding to compare between music genres and revealed that classical music, engineered soundscapes, and natural sounds were the best genres for increasing focus, while pop and hip-hop were the worst. These insights can enable human and artificial intelligence composers to produce increases or decreases in listener focus with high temporal (millisecond) precision. Future research will include real-time adaptation of audio for other functional objectives beyond affecting focus, such as affecting listener enjoyment, drowsiness, stress and memory.
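Predicting focus "a priori based on physical properties of audio content", as reported above, amounts to mapping audio features to a score. The sketch below is a generic linear model; the feature names and weights are invented for illustration and are not the model fitted in the study:

```python
def predict_focus(features, weights, bias=0.0):
    """Minimal linear-model sketch: a weighted sum of standardized
    audio features yields a focus score. Positive weights mean the
    feature is assumed to raise focus, negative weights to lower it."""
    return bias + sum(weights[k] * v for k, v in features.items())

# Hypothetical standardized features of one audio segment.
segment = {'tempo': 0.2, 'spectral_flux': -0.5, 'loudness_var': -0.1}
# Hypothetical weights: smooth, quiet, steady audio scores higher.
weights = {'tempo': 0.3, 'spectral_flux': -0.4, 'loudness_var': -0.2}
score = predict_focus(segment, weights)
```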
... Several studies have suggested functional relationships between specific structural parameters of music and emotional responses (21). Diverse music arrangements can trigger specific brain waves, and several studies have employed music to trigger specific patterns of brain activity in treating mental disorders and other clinical conditions (22–24). ...
... Participants were assessed at pre- and post-intervention by one psychologist blind to group allocation through a combination of psycho- (8–17), moderate (18–24), and severe depression (≥25) indicated that the cut-off to suggest remission should be equal to or lower than 7. The STAI-Y is composed of two forms of twenty four-point Likert statements: STAI-Y1 assessing state anxiety and STAI-Y2 for trait anxiety. ...
Article
Full-text available
Background and aim: Although many mental disorders have relevant roots in neurobiological dysfunctions, most intervention approaches neglect neurophysiological features or rely on pharmacological intervention alone. Non-invasive Brain-Computer Interfaces (BCIs), providing natural ways of modulating mood states, can be promoted as an alternative intervention for coping with neurobiological dysfunction. Methods: A BCI prototype was proposed to feed back a person's affective state such that a closed-loop interaction between the participant's brain responses and the musical stimuli is established. It feeds back, in real time, flickering lights matched to the individual's brain rhythms during auditory stimulation. An RCT was carried out on 15 individuals of both genders (mean age = 49.27 years) with anxiety and depressive spectrum disorders, randomly assigned to 2 groups (experimental vs. active control). Results: Outcome measures revealed both a significant decrease in Hamilton Rating Scale for Depression (HAM-D) scores and gains in cognitive functions, but only for participants who underwent the experimental treatment. Variability in HAM-D scores seems explained by changes in the beta 1, beta 2, and delta bands. Conversely, the rise in cognitive function scores appears associated with theta variations. Conclusions: Future work needs to validate the relationship proposed here between music and brain responses. Findings of the present study support a range of research examining BCI brain modulation and contribute to the understanding of this technique as an instrument for alternative therapies. We believe that Neuro-Upper can be used as an effective new tool for investigating affective responses and emotion regulation (www.actabiomedica.it).
... Recent data also suggest a positive effect on AS 12 . Music has the potential to enhance the effects of simple acoustic cues due to its motivating properties 13,14 and has been shown to be more effective than a metronome for audio-motor entrainment in patients with PD 15 . Sonification is a further therapeutic approach that uses acoustic stimuli to enhance motor function. ...
... Moreover, this type of feedback can be provided continuously without time limitations. The continuous provision of feedback during a motor task along with rhythmic auditory and emotional stimulation 11,13 with CuraSwing can thus be used to achieve a sustained increase of AS to a normal level in habitual walking or to enhance exercise with supra-normal amplitudes. ...
Article
Full-text available
Background: Reduction of arm swing during gait is an early and common symptom in Parkinson's disease (PD). By using the technology of a mobile phone, acceleration of arm swing can be converted into a closed-loop musical feedback (musification) to improve gait. Objectives: To assess arm swing in healthy subjects and the effects of musification on arm swing amplitude and other gait parameters in patients with PD. Methods: Gait kinematics were analyzed in 30 patients during a 320 m walk in 3 different conditions comprising (1) normal walking; (2) focused swinging of the more affected arm; and (3) with musification of arm swing provided by the iPhone application CuraSwing. The acceleration of arm swing was converted into musical feedback. Arm swing range of motion and further gait kinematics were analyzed. In addition, arm swing in patients was compared to 32 healthy subjects walking at normal, slow, and fast speeds. Results: Musification led to a large and bilateral increase of arm swing range of motion in patients. The increase was greater on the more affected side of the patient (+529.5% compared to baseline). In addition, symmetry of arm swing, sternum rotation, and stride length increased. With musical feedback patients with PD reached arm swing movements within or above the range of healthy subjects. Conclusions: Musification has an immediate effect on arm swing and other gait kinematics in PD. The results suggest that closed-loop musical feedback is an effective technique to improve walking in patients with PD.
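The musification principle described above, converting arm-swing acceleration into continuous musical feedback, can be sketched as a simple parameter mapping. The mapping function, acceleration ceiling, and MIDI note range below are illustrative assumptions, not the CuraSwing implementation:

```python
# Illustrative sketch of "musification": map absolute arm-swing
# acceleration (m/s^2) linearly onto a MIDI pitch range, so a larger
# swing produces a higher note. All parameter values are assumptions.
def acceleration_to_midi(accel, accel_max=20.0, low_note=48, high_note=84):
    """Return a MIDI note number for a given acceleration sample."""
    frac = min(abs(accel) / accel_max, 1.0)   # clip to [0, 1]
    return round(low_note + frac * (high_note - low_note))

print(acceleration_to_midi(0.0))    # no swing -> lowest note (48)
print(acceleration_to_midi(20.0))   # full swing -> highest note (84)
```

Running such a mapping on each sensor sample yields a closed loop: reduced arm swing is immediately audible as a lower, flatter melody, which gives the patient a target to push against.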
... From the perspective of individual functioning, musical activity and contact with music may have been connected with processes of synchronization across different brain areas, cognitive functions, and motor intelligence, which had a significant impact on the survival chances of early humans (Podlipniak, 2011). Moreover, music also acted on the emotional sphere, since it stimulates pleasure centers and triggers the release of dopamine associated with the reward mechanism (Ferreri et al., 2019; Salimpoor, Zald, Zatorre, Dagher, McIntosh, 2015). Musical creativity may also have served an important function in social behaviors, e.g., ...
Book
Full-text available
This book contains a review of research on the significance of creativity for shaping, maintaining, and returning to health and wellbeing. It also presents a model explaining the mechanism of the "health-promoting" effects of creativity, based on existing theoretical work and the latest empirical findings. To obtain a full picture of this phenomenon, both the processes that positively affect health and the risk factors accompanying creativity that may threaten it are taken into account. The most important directions of development and the challenges for research and practice in the area of the relationships between creativity and health functioning are also presented.
... Somewhere in between lies what has been called positive prediction error. This happens when, unexpectedly, some of the predictions are violated throughout the musical structure, generating a pleasant surprise effect (Mas-Herrero et al., 2013; Salimpoor et al., 2015; Zatorre, 2015). ...
Article
Full-text available
In this work, we study the emotional link between music and color as well as the relationship that pleasure and familiarity have with music-evoked emotions, with emphasis on what happens with atonal music. This is important, since the role of music's mode (major or minor) in determining the perception of musical emotion for tonal music, and thus its association with color, is already known. We decided to use Russell's (1980) multidimensional emotional scale model to assess music and colors, together with the four dimensions of the perceptual characteristics of color (red-green, yellow-blue, saturation, and brightness) used by Palmer et al. (2013). Our results showed that atonal music shares a behavior similar to that found by Palmer et al. (2013) for tonal music; specifically, music evaluated as joyful is linked to the colors perceived with that same emotion (saturated, light, and yellow), and sad music is linked to sad colors (desaturated, dark, and blue). Therefore, these findings reaffirm the importance of emotion in mediating the music-to-color association in both tonal and atonal music. Regarding our other variables, we found a positive correlation between pleasure and familiarity as well as between pleasure and emotional valence, although half of the music evaluated as sad (negative valence) was also evaluated as pleasant. (PsycInfo Database Record (c) 2022 APA, all rights reserved)
... The neural mechanisms that are engaged from the moment we start to listen to music to the moment we are tapping our foot in time with the beat or start dancing rely on the human brain's ability to integrate external stimuli with internal representations, expectations, or predictions (Pando-Naude et al., 2021). This continuous flow of stimulus-driven bottom-up information and top-down processes requires specialized neural processing, such as audiomotor coupling (Jäncke, 2012), a phenomenon driven by temporal predictions (Vuust et al., 2009; Schröger et al., 2015) that is associated with reward, pleasure, and other cognitive and emotional mechanisms (Koelsch, 2010, 2014, 2020; Koelsch et al., 2013; Salimpoor et al., 2015). How accurately we can predict a rhythm and how pleasurable a rhythm is therefore depend, on the one hand, on an individual's long-term priors, such as listening biography, cultural background, musical expertise, dance training, and general cognitive and motor abilities, and on the other hand, on the rhythm's complexity. ...
Article
Full-text available
Groove—defined as the pleasurable urge to move to a rhythm—depends on a fine-tuned interplay between predictability arising from repetitive rhythmic patterns, and surprise arising from rhythmic deviations, for example in the form of syncopation. The perfect balance between predictability and surprise is commonly found in rhythmic patterns with a moderate level of rhythmic complexity and represents the sweet spot of the groove experience. In contrast, rhythms with low or high complexity are usually associated with a weaker experience of groove because they are too boring to be engaging or too complex to be interpreted, respectively. Consequently, the relationship between rhythmic complexity and groove experience can be described by an inverted U-shaped function. We interpret this inverted U shape in light of the theory of predictive processing and provide perspectives on how rhythmic complexity and groove can help us to understand the underlying neural mechanisms linking temporal predictions, movement, and reward. A better understanding of these mechanisms can guide future approaches to improve treatments for patients with motor impairments, such as Parkinson’s disease, and to investigate prosocial aspects of interpersonal interactions that feature music, such as dancing. Finally, we present some open questions and ideas for future research.
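The inverted-U relationship described above can be made concrete with a toy model. The Gaussian functional form, the peak location, and the width below are assumptions chosen for illustration, not a fit to the empirical groove data:

```python
import numpy as np

# Toy model of the inverted U: groove is maximal at moderate rhythmic
# complexity and falls off toward very simple or very complex rhythms.
def groove_rating(complexity, peak=0.5, width=0.35):
    """Inverted-U groove response for complexity values in [0, 1]."""
    return np.exp(-((complexity - peak) ** 2) / (2 * width ** 2))

c = np.linspace(0.0, 1.0, 101)
ratings = groove_rating(c)
best = c[np.argmax(ratings)]   # the "sweet spot" sits at moderate complexity
```

Under these assumptions, rhythms at the extremes (complexity near 0 or 1) score lower than the moderate sweet spot, reproducing the qualitative shape reported in the groove literature.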
... While exploration can be defined as "learning about the properties of an uncertain environment" (Gazzaniga et al., 2010, p. 1,065), a complementary behavior termed "exploitation" refers to a state in which an individual benefits from a familiar environment in which they know where rewards can be obtained (Kidd and Hayden, 2015). Arguably, Western tonality, with its inherent tonal and metrical hierarchy, offers a (Western) listener the opportunity for immediate exploitation (Meyer, 1956; Huron, 2006; Vuust and Frith, 2008; Rohrmeier and Koelsch, 2012; Salimpoor et al., 2015; Koelsch et al., 2019). In contrast, for atonal music, which lacks such fundamental predictability, there is no structure to be gleaned, especially when we first listen, and accordingly no immediate exploitation to be afforded. ...
Article
Full-text available
Atonal music is often characterized by low predictability stemming from the absence of tonal or metrical hierarchies. In contrast, Western tonal music exhibits intrinsic predictability due to its hierarchical structure and therefore, offers a directly accessible predictive model to the listener. In consequence, a specific challenge of atonal music is that listeners must generate a variety of new predictive models. Listeners must not only refrain from applying available tonal models to the heard music, but they must also search for statistical regularities and build new rules that may be related to musical properties other than pitch, such as timbre or dynamics. In this article, we propose that the generation of such new predictive models and the aesthetic experience of atonal music are characterized by internal states related to exploration. This is a behavior well characterized in behavioral neuroscience as fulfilling an innate drive to reduce uncertainty but which has received little attention in empirical music research. We support our proposal with emerging evidence that the hedonic value is associated with the recognition of patterns in low-predictability sound sequences and that atonal music elicits distinct behavioral responses in listeners. We end by outlining new research avenues that might both deepen our understanding of the aesthetic experience of atonal music in particular, and reveal core qualities of the aesthetic experience in general.
... Even passive music listening is strongly rooted in motor processes in the brain (Grahn and Brett, 2007;Gordon et al., 2018). Anticipation of melodic, harmonic, and rhythmic content of a musical work engages canonical emotion, reward, and motor networks in the brain (Salimpoor et al., 2015;Vuust et al., 2022). Rhythmic components of music are acutely associated with predictive and motor processes (Koelsch et al., 2019;Proksch et al., 2020). ...
Article
Full-text available
Artificial Intelligence has shown paradigmatic success in defeating world champions in strategy games. However, the same programming tactics are not a reasonable approach to creative and ostensibly emotional artistic endeavors such as music composition. Here we review key examples of current creative music-generating AIs, noting both their progress and limitations. We propose that these limitations are rooted in current AIs' lack of thoroughly embodied, interoceptive processes associated with the emotional component of music perception and production. We examine some current music-generating machines that appear to be minimally addressing this issue by appealing to something akin to interoceptive processes. To conclude, we argue that a successful music-making AI requires both the generative capacities at which current AIs are constantly progressing, and thoroughly embodied, interoceptive processes which more closely resemble the processes underlying human emotions.
... This is supported by qualitative work describing the absorptive and immersive nature of the sensation of groove (Danielsen, 2006;Witek, 2017), which may inhibit analytical comparisons of the timing of onsets and movements. However, here we follow several influential theoretical accounts that highlight the importance of predictions in determining musical pleasure (Huron, 2006;Koelsch et al., 2019;Meyer, 1956;Salimpoor et al., 2015). From this perspective, groove, and other affective responses to music, result from the way in which music engages our predictive processes. ...
Article
Full-text available
The sensation of groove can be defined as the pleasurable urge to move to rhythmic music. When moving to the beat of a rhythm, both how well movements are synchronized to the beat, and the perceived difficulty in doing so, are associated with groove. Interestingly, when tapping to a rhythm, participants tend to overestimate their synchrony, suggesting a potential discrepancy between perceived and measured synchrony, which may impact their relative relation with groove. However, these relations, and the influence of syncopation and musicianship on these relations, have yet to be tested. Therefore, we asked participants to listen to 50 drum patterns with varying rhythmic complexity and rate their sensation of groove. They then tapped to the beat of the same drum patterns and rated how well they thought their taps synchronized with the beat. Perceived synchrony showed a stronger relation with groove ratings than measured synchrony and syncopation, and this effect was strongest for medium-complexity rhythms. We interpret these results in the context of meter-based temporal predictions. We propose that the certainty of these predictions determines the weight and number of movements that are perceived as synchronous and thus reflect rewarding prediction confirmations.
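"Measured synchrony" in tapping studies is typically derived from the timing offsets between taps and beats. A minimal sketch of one such metric, mean absolute asynchrony against the nearest beat, is shown below; the specific metric and units used in the study may differ:

```python
import numpy as np

# Sketch: quantify measured synchrony as the mean absolute asynchrony
# (in seconds) between each tap and its nearest beat.
def mean_abs_asynchrony(tap_times, beat_times):
    taps = np.asarray(tap_times)
    beats = np.asarray(beat_times)
    # For each tap, take the distance to the closest beat.
    return float(np.mean(np.min(np.abs(taps[:, None] - beats[None, :]), axis=1)))

beats = np.arange(0.0, 10.0, 0.5)    # isochronous beat, 120 BPM
accurate = beats + 0.01              # taps consistently 10 ms late
sloppy = beats + 0.08                # taps consistently 80 ms late
```

A discrepancy between this objective number and a participant's self-rated synchrony is exactly the perceived-versus-measured gap the abstract describes.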
... Rhythmic complexity, and the related affective and behavioral responses, play important roles in recent research on music-supported movement therapies, social bonding, and neurophysiological mechanisms underlying motor timing and reward processes. Predicting how music unfolds and develops over time is a rewarding, pleasurable process [7,8] that involves neural auditory-motor interactions [9,10]. With music that is rated as high-groove, neural auditory-motor interactions are more affected than with low-groove music [4]. ...
Article
Full-text available
When listening to music, we often feel a strong desire to move our body in relation to the pulse of the rhythm. In music psychology, this desire to move is described by the term groove. Previous research suggests that the sensation of groove is strongest when a rhythm is moderately complex, i.e., when the rhythm hits the sweet spot between being too simple to be engaging and too complex to be interpretable. This means that the relationship between rhythmic complexity and the sensation of groove can be described by an inverted U-shape (Matthews, 2019). Here, we recreate this inverted U-shape with a stimulus set that was reduced from 54 to only nine rhythms. Thereby, we provide an efficient toolkit for future studies to induce and measure different levels of groove sensations. Pleasure and movement induction in relation to rhythmic complexity are emerging topics in music cognition and neuroscience. Investigating the sensation of groove is important for understanding the neurophysiological mechanisms underlying motor timing and reward processes in the general population, and in patients with conditions such as Parkinson’s disease, Huntington’s disease and motor impairment after stroke. The experimental manipulation of groove also provides new approaches for research on social bonding in interpersonal movement interactions that feature music. Our brief stimulus set facilitates future research on these topics by enabling the creation of efficient and concise paradigms.
... This idea is supported by the observation that pleasurable music reduces anxiety and stress through downregulation of the autonomic nervous system (22–24), increasing dopamine and serotonin release in the striatum (12,25,26), increasing µ-opioid receptor and endorphin production (27), and recruiting reward and limbic regions to modulate motivation, learning and valuation (18,25,28). Anxiety, stress, learning, and reward play prominent roles in how we evaluate the relative importance of painful stimuli and our ability to cognitively and emotionally regulate pain (29–33). Furthermore, increased opioid receptor, endorphin, dopamine, and serotonin production directly interact with the descending opioidergic analgesic pathway consisting of the periaqueductal grey (PAG)-rostral ventromedial medulla (RVM)-spinal cord (34–36). ...
Article
Full-text available
Pain is often viewed and studied as an isolated perception. However, cognition, emotion, salience effects, and autonomic and sensory input are all integrated to create a comprehensive experience. Music-induced analgesia has been used for thousands of years, with moderate behavioural effects on pain perception, yet the neural mechanisms remain ambiguous. The purpose of this study was to investigate the effects of music analgesia through individual ratings of pain, and changes in connectivity across a network of regions spanning the brain and brainstem that are involved in limbic, paralimbic, autonomic, cognitive, and sensory domains. This is the first study of its kind to assess the effects of music analgesia using complex network analyses in the human brain and brainstem. Functional MRI data were collected from 20 healthy men and women with concurrent presentation of noxious stimulation and music, in addition to control runs without music. Ratings of peak pain intensity and unpleasantness were collected for each run and were analysed in relation to the functional data. We found that music alters connectivity across these neural networks between regions such as the insula, thalamus, hypothalamus, amygdala and hippocampus (among others), and is impacted by individual pain sensitivity. While these differences are important for how we understand pain and analgesia, it is essential to note that these effects are variable across participants and provide moderate pain relief at best. Therefore, a therapeutic strategy involving music should use it as an adjunct to pain management in combination with healthy lifestyle changes and/or pharmaceutical intervention.
... The continuous liking ratings correlated with brain activity in the reward network, in line with earlier empirical findings and theoretical suggestions (Bohrn et al., 2013; Sachs et al., 2016; Salimpoor et al., 2015; Wald, 2015; Wassiliwizky et al., 2015). Specifically, we found activation clusters in the prefrontal cortex, the anterior cingulate cortex and the thalamus, all key areas of the reward network. ...
Article
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem-processing towards the left and a bias for song-processing on the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, while song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs, while continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on processing their melodic properties in bilateral temporal auditory areas and an additional right fronto-temporal network supporting the processing of melodies in songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music processing in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties in both domains.
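A PSA-style measure can be sketched as the proportion of lags at which the autocorrelation of a pitch contour exceeds an approximate 95% significance bound (±1.96/√N). The bound, the lag range, and the toy contours below are assumptions for illustration; the published measure's details may differ:

```python
import numpy as np

# Sketch of a "proportion of significant autocorrelations" measure:
# fraction of lags whose autocorrelation of the pitch contour exceeds
# an approximate 95% significance bound.
def psa(pitch, max_lag=20):
    x = np.asarray(pitch, dtype=float)
    x = x - x.mean()
    n = len(x)
    denom = np.sum(x * x)
    bound = 1.96 / np.sqrt(n)    # approximate white-noise bound
    sig = 0
    for lag in range(1, max_lag + 1):
        r = np.sum(x[:-lag] * x[lag:]) / denom
        if abs(r) > bound:
            sig += 1
    return sig / max_lag

t = np.arange(200)
melodic = np.sin(2 * np.pi * t / 10)                 # repetitive, song-like contour
noise = np.random.default_rng(1).standard_normal(200)  # unstructured contour
```

A strongly periodic contour yields a PSA near 1, while an unstructured one stays near the chance level, which is what makes the measure a useful index of melodicity.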
... The adaptation in Figure 2 indicates how the mechanisms active in this code are reflection, contagion, and rhythmic synchronization; that is, those that process the low-level acoustic features: tempo, mode, articulation, attack, and timbre. Through the intrinsic code, on the other hand, less specific emotions are encoded, related to the tension of awaiting the resolution of a process of musical expectation and to the pleasure or displeasure of receiving what was expected, or of being surprised by something better or worse than expected (Salimpoor et al., 2013, 2015). The structures involved in the latter require a certain development over time, since it refers to sequences or syntactic patterns processed through the expectation mechanism. ...
Chapter
Full-text available
From a phenomenological perspective, grounded in the semiotics of learning (Cárdenas-Castillo, 2001), the teaching-learning process in an orchestral conducting course was analyzed. Results are presented from observations and interviews conducted during an orchestral conducting workshop at a higher-education institution. The analysis brings us closer to understanding the signs and semiotic registers involved in the didactics of orchestral conducting, which in turn guides the choice of pedagogical strategies consistent with teachers' intentions. This work was carried out within the framework of research project PIE21-1, "Formación de directores orquestales: Acercamiento fenomenológico a la propuesta pedagógica de Jorge Pérez-Gómez," registered with the Dirección General de Investigación y Posgrado of the Universidad Autónoma de Aguascalientes.
... The adaptation in Figure 2 indicates how the mechanisms active in this code are reflection, contagion, and rhythmic synchronization; that is, those that process the low-level acoustic features: tempo, mode, articulation, attack, and timbre. Through the intrinsic code, on the other hand, less specific emotions are encoded, related to the tension of awaiting the resolution of a process of musical expectation and to the pleasure or displeasure of receiving what was expected, or of being surprised by something better or worse than expected (Salimpoor et al., 2013, 2015). The structures involved in the latter require a certain development over time, since it refers to sequences or syntactic patterns processed through the expectation mechanism. ...
Chapter
Full-text available
It is understood that each user of a dance system gives it meaning from their own present and according to the value the community has assigned to its elements. A deep understanding of jazz dance is therefore required, especially when situating it in an educational context. This work presents a possible avenue of semiotic analysis of jazz dance that could be helpful in the teaching-learning process. The first section sets out the problematization, the concepts, and the authors' understanding of the possible semiotic process in the actors involved in jazz dance. The second presents a brief profile of jazz dance in order to understand the signs at play. Subsequently, the semiotic analysis is presented through three basic elements that act as representamen (signs) of the object (jazz dance), on which the interpretant (dancer or spectator) constructs their meanings: a) technique, b) costume and stage, and c) the performer's body. It concludes with a section of reflections.
... The adaptation in Figure 2 indicates how the mechanisms active in this code are reflection, contagion, and rhythmic synchronization; that is, those that process the low-level acoustic features: tempo, mode, articulation, attack, and timbre. Through the intrinsic code, on the other hand, less specific emotions are encoded, related to the tension of awaiting the resolution of a process of musical expectation and to the pleasure or displeasure of receiving what was expected, or of being surprised by something better or worse than expected (Salimpoor et al., 2013, 2015). The structures involved in the latter require a certain development over time, since it refers to sequences or syntactic patterns processed through the expectation mechanism. ...
Chapter
Full-text available
First, the construct of the emotional meaning of music is characterized through a historical context and the explanation of a theoretical corpus on musical emotions. Then, the psychological mechanisms of emotional responses to music are presented, along with the three-code model of emotional meaning (Juslin, 2013b) on which the present proposal is based, and Peirce's typologies. The discussion offers critiques of Juslin's model that lead to its reformulation and to a concluding reflection on the academic impact of this exercise, which contributes to the design of strategies for addressing the dimension of the emotional meaning of music in tasks such as composition, performance, teaching, audience development, therapy design, and research.
... To make the rhythm more interesting to listeners and to thus induce groove sensation and the related positive affective responses, a balance between expectation and violation of the rhythm is thought to be important 12. This is supported by the hypothesis that prediction error (deviation from predicted rhythm) is a requisite for music to activate the reward system in the brain 43–45. Since the validity of our experimental design was confirmed, we proceeded to the detection of the effect of GR on cognitive function. ...
Article
Full-text available
Hearing a groove rhythm (GR), which creates the sensation of wanting to move to the music, can also create feelings of pleasure and arousal in people, and it may enhance cognitive performance, as does exercise, by stimulating the prefrontal cortex. Here, we examined the hypothesis that GR enhances executive function (EF) by acting on the left dorsolateral prefrontal cortex (l-DLPFC), while also considering individual differences in psychological responses. Fifty-one participants underwent two conditions: 3 min of listening to GR or a white-noise metronome. Before and after listening, participants performed the Stroop task and were monitored for l-DLPFC activity with functional near-infrared spectroscopy. Our results show that GR enhanced EF and l-DLPFC activity in participants who felt a greater groove sensation and felt more clear-headed after listening to GR. Further, these psychological responses predict the impact of GR on l-DLPFC activity and EF, suggesting that GR enhances EF via l-DLPFC activity when the psychological response to GR is enhanced.
... How can a sequence of tones generate tension and relaxation? One way is by eliciting expectations (e.g., Huron, 2006;Meyer, 1956;Salimpoor et al., 2015;Zatorre, 2018). When hearing a melody, listeners form expectations about upcoming notes or groups of notes. ...
Article
Full-text available
Research has investigated psychological processes in an attempt to explain how and why people appreciate music. Three programs of research have shed light on these processes. The first focuses on the appreciation of musical structure. The second investigates self-oriented responses to music, including music-evoked autobiographical memories, the reinforcement of a sense of self, and benefits to individual health and wellbeing. The third seeks to explain how music listeners become sensitive to the causal and contextual sources of music making, including the biomechanics of performance, knowledge of musicians and their intentions, and the cultural and historical context of music making. To date, these programs of research have been carried out with little interaction, and the third program has been omitted from most psychological enquiries into music appreciation. In this paper, we review evidence for these three forms of appreciation. The evidence reviewed acknowledges the enormous diversity in antecedents and causes of music appreciation across contexts, individuals, cultures, and historical periods. We identify the inputs and outputs of appreciation, propose processes that influence the forms that appreciation can take, and make predictions for future research. Evidence for source sensitivity is emphasized because the topic has been largely unacknowledged in previous discussions. This evidence implicates a set of unexplored processes that bring to mind causal and contextual details associated with music, and that shape our appreciation of music in important ways. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
... The other involves the processing of explicitly learned information that reflects familiarity with a specific piece of music (see Section "Amount of Surprise Over Temporally Integrated Expectations"). The superior temporal cortex is believed to accumulate templates of sound events that a person learns over time (Peretz et al., 2009; Salimpoor et al., 2015). In evidence, electrical stimulation of the superior temporal cortex provokes musical hallucinations (Penfield and Perot, 1963). ...
Article
Full-text available
Obtaining information from the world is important for survival. The brain, therefore, has special mechanisms to extract as much information as possible from sensory stimuli. Hence, given its importance, the amount of available information may underlie aesthetic values. Such information-based aesthetic values would be significant because they would compete with others to drive decision-making. In this article, we ask, “What is the evidence that the amount of information supports aesthetic values?” An important concept in the measurement of informational volume is entropy. Research on aesthetic values has thus used Shannon entropy to evaluate the contribution of quantity of information. We review here the concepts of information and aesthetic values, and research on the visual and auditory systems, to probe whether the brain uses entropy or other relevant measures, especially Fisher information, in aesthetic decisions. We conclude that information measures contribute to these decisions in two ways: first, the absolute quantity of information can modulate aesthetic preferences for certain sensory patterns. However, the preference for volume of information is highly individualized, with information measures competing with organizing principles, such as rhythm and symmetry. In addition, people tend to be resistant to too much entropy, but not necessarily to high amounts of Fisher information. We show that this resistance may stem in part from the distribution of amount of information in natural sensory stimuli. Second, the measurement of entropy-like quantities over time reveals that they can modulate aesthetic decisions by varying degrees of surprise given temporally integrated expectations. We propose that amount of information underpins complex aesthetic values, possibly informing the brain on the allocation of resources or the situational appropriateness of some cognitive models.
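The entropy measure discussed above can be made concrete with a short sketch (function name and toy sequences are my own, purely illustrative): Shannon entropy quantifies the average surprise per element of a stimulus sequence, the quantity the abstract treats as a candidate aesthetic measure.

```python
import math
from collections import Counter

def shannon_entropy(sequence):
    """Shannon entropy in bits: the average surprise per element."""
    counts = Counter(sequence)
    n = len(sequence)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

shannon_entropy("AAAA")   # 0.0 bits: fully predictable
shannon_entropy("ABAB")   # 1.0 bit per symbol
shannon_entropy("ABCD")   # 2.0 bits: maximally unpredictable for 4 symbols
```

The value rises from fully predictable to maximally unpredictable sequences, the dimension along which, on this account, aesthetic preferences vary.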
... Several recent neural and behavioral studies provide initial support for the rewarding aspect of prediction confirmation across domains. Sensory input (in the form of musical sounds) that matches the predicted information triggers an intrinsic reward response (Salimpoor et al., 2015). Similarly, observing targets that match people's stereotypical predictions is also rewarding (Reggev et al., 2021). ...
Article
Full-text available
The predictive processing framework posits that people continuously use predictive principles when interacting with, learning from, and interpreting their surroundings. Here, we suggest that the same framework may help explain how people process self-relevant knowledge and maintain a stable and positive self-concept. Specifically, we recast two prominent self-relevant motivations, self-verification and self-enhancement, in predictive processing (PP) terms. We suggest that these self-relevant motivations interact with the self-concept (i.e., priors) to create strong predictions. These predictions, in turn, influence how people interpret information about themselves. In particular, we argue that these strong self-relevant predictions dictate how prediction error, the deviation from the original prediction, is processed. In contrast to many implementations of the PP framework, we suggest that predictions and priors emanating from stable constructs (such as the self-concept) cultivate belief-maintaining, rather than belief-updating, dynamics. Based on recent findings, we also postulate that evidence supporting a predicted model of the self (or interpreted as such) triggers subjective reward responses, potentially reinforcing existing beliefs. Characterizing the role of rewards in self-belief maintenance and reframing self-relevant motivations and rewards in predictive processing terms offers novel insights into how the self is maintained in neurotypical adults, as well as in pathological populations, potentially pointing to therapeutic implications.
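The prediction-error idea invoked here can be illustrated with a minimal Rescorla-Wagner-style sketch (learning rate and reward values are invented, not taken from the paper): the error is the gap between outcome and prediction, and each update nudges the prediction toward the outcome by a fraction of that error.

```python
# Minimal prediction-error learning sketch (illustrative values only).
def update(prediction, reward, alpha=0.1):
    delta = reward - prediction           # prediction error
    return prediction + alpha * delta, delta

prediction = 0.0
for _ in range(100):                      # repeated exposure to the same reward
    prediction, delta = update(prediction, reward=1.0)
# prediction has converged near 1.0, so the error has nearly vanished:
# a fully expected outcome no longer produces an error-driven response
```

This also captures why belief-maintaining dynamics are stable: once priors predict incoming evidence well, little error remains to drive updating.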
... Reasonably, this may impact musical expectation and predictability and, in turn, the rewarding experience. In addition, listeners' musical preferences, which are seen as a major constituent of the aesthetic experience of music (Nieminen et al., 2011), are strongly modulated by previous exposure (Salimpoor et al., 2015), and stabilize only in early adulthood (LeBlanc et al., 1996). It might be assumed, therefore, that the experience of musical reward in childhood is only partially related to familiarity, expectation, and predictability, as seen among adults. ...
Article
Music is one of the most pleasurable human experiences. However, the determinants of the variation in individual sensitivity to musical reward are not yet fully unraveled. Empathy has been identified as a determinant of musical affect, including consciously experiencing pleasure from listening to sad music. Additionally, higher musical expertise may enhance pleasurable responses to music, whereas aging decreases individual sensitivity to musical pleasure. We conducted a study to investigate the contributions of empathy and musical abilities to musical pleasure, measured by the Interpersonal Reactivity Index, the Musical Ear Test, and the Barcelona Musical Reward Questionnaire, respectively. To this end, we performed a developmental comparison between 48 children (9–11 years old) and 42 adults (18–32 years old). Our findings suggest that individual sensitivity to musical reward is positively correlated with trait empathy in both adults and children, but not with musical abilities. However, when inserted in a regression model including empathy, musical abilities are also predictive of musical reward, but only among adults. These results show that empathy plays a crucial role in determining individual sensitivity to musical reward, whereas musical abilities are less influential. More broadly, this study helps shed light on the determinants of emotional responses to music.
... Second, it potentially offers a different interpretation of studies of reward system activation in response to music listening (Salimpoor et al., 2011; Salimpoor & Zatorre, 2013). Results from such studies are often interpreted within the framework of musical expectation theory (Huron, 2006; Meyer, 1956), which focuses on music's ability to generate salient expectations, combined with the framework of prediction error theory, which predicts positive reward to be generated when outcomes are better than expected, though what that means for music is unclear (Cheung et al., 2019; Salimpoor et al., 2015; Schultz, 2017). However, this interpretation fails to convincingly account for the clear preference for familiar music by most listeners (Madison & Schiölde, 2017; Pereira et al., 2011), since fully predicted outcomes are not meant to trigger a reward. ...
Article
Theories of music evolution rely on our understanding of what music is. Here, I argue that music is best conceptualized as an interactive technology, and propose a coevolutionary framework for its emergence. I present two basic models of attachment formation through behavioral alignment applicable to all forms of affiliative interaction, and argue that the most critical distinguishing feature of music is entrained temporal coordination. Music's unique interactive strategy invites active participation and allows interactions to last longer, include more participants, and unify emotional states more effectively. Regarding its evolution, I propose that music, like language, evolved in a process of collective invention followed by genetic accommodation. I provide an outline of the initial evolutionary process that led to the emergence of music, centered on four key features: technology, shared intentionality, extended kinship, and multilevel society. Implications of this framework for music evolution, psychology, and cross-species and cross-cultural research are discussed.
... Listening to music can be a rewarding, pleasurable experience, and several studies have investigated the neural and physiological bases of music-induced pleasure and reward (for reviews, see Belfi & Loui, 2020; Salimpoor et al., 2015; Zatorre, 2015; Zatorre & Salimpoor, 2013). One commonly used correlate of music-induced pleasure is the chills response, which has been described as having goose bumps on the skin (Panksepp, 1995) or a shiver down the spine (Blood & Zatorre, 2001). ...
Book
This Element reviews literature on the physiological influences of music during perception and action. It outlines how acoustic features of music influence physiological responses during passive listening, with an emphasis on comparisons of analytical approaches. It then considers specific behavioural contexts in which physiological responses to music impact perception and performance. First, it describes physiological responses to music that evoke an emotional reaction in listeners. Second, it delineates how music influences physiology during music performance and exercise. Finally, it discusses the role of music perception in pain, focusing on medical procedures and laboratory-induced pain with infants and adults.
... In the auditory domain, an intermediate degree of unpredictability and its resolution during listening are thought to evoke musical pleasure [36] in agreement with predictive coding accounts of brain function [37,73] (for a review of possible neural correlates of musical expectations in the human brain, see [74]). We speculate that a certain degree of unpredictability in the distribution of basic structural (perceptual) features is one of the hallmarks of aesthetically appreciated stimuli. ...
Article
Full-text available
Computational textual aesthetics aims at studying observable differences between aesthetic categories of text. We use Approximate Entropy to measure the (un)predictability in two aesthetic text categories, i.e., canonical fiction (‘classics’) and non-canonical fiction (with lower prestige). Approximate Entropy is determined for series derived from sentence-length values and the distribution of part-of-speech-tags in windows of texts. For comparison, we also include a sample of non-fictional texts. Moreover, we use Shannon Entropy to estimate degrees of (un)predictability due to frequency distributions in the entire text. Our results show that the Approximate Entropy values can better differentiate canonical from non-canonical texts compared with Shannon Entropy, which is not true for the classification of fictional vs. expository prose. Canonical and non-canonical texts thus differ in sequential structure, while inter-genre differences are a matter of the overall distribution of local frequencies. We conclude that canonical fictional texts exhibit a higher degree of (sequential) unpredictability compared with non-canonical texts, corresponding to the popular assumption that they are more ‘demanding’ and ‘richer’. In using Approximate Entropy, we propose a new method for text classification in the context of computational textual aesthetics.
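Approximate Entropy, the measure this study relies on, can be sketched as follows (the parameter values m=2 and r=0.2 mirror common defaults, not necessarily the study's settings): it compares how often patterns of length m recur, within tolerance r, against patterns of length m+1; regular series score near zero, irregular series score higher.

```python
import math
import random

def approx_entropy(series, m=2, r=0.2):
    """Approximate Entropy (Pincus-style): regularity of a numeric series.
    Values near 0 = repetitive/predictable; larger values = irregular."""
    n = len(series)
    def phi(m):
        # all overlapping subsequences ("templates") of length m
        templates = [series[i:i + m] for i in range(n - m + 1)]
        logs = []
        for t1 in templates:
            # fraction of templates within tolerance r of t1 (Chebyshev distance)
            similar = sum(
                1 for t2 in templates
                if max(abs(a - b) for a, b in zip(t1, t2)) <= r
            )
            logs.append(math.log(similar / len(templates)))
        return sum(logs) / len(logs)
    return phi(m) - phi(m + 1)

random.seed(0)
periodic = [1.0, 2.0] * 50                          # fully predictable series
irregular = [random.random() * 2 for _ in range(100)]
# approx_entropy(periodic) is near 0; approx_entropy(irregular) is much larger
```

Applied to sentence-length series, as in the study, higher values correspond to less predictable sequential structure.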
... The periodic character of the music (implied in the expressive timing of pulses and beats) fits well with the ability of the human motor system to act in concert with attention sharpened by periodicity in signals (Morillon et al., 2015), as music is often designed to be danced and moved upon (Wang, 2015). On top of that, a major asset of using music as a feedback source concerns its ability to motivate and provide pleasure (Salimpoor et al., 2015). This feature may be related to its rhythmicity (Wang, 2015), which is valuable in contexts that often involve strenuous physical activity, boredom, and fatigue (Fritz et al., 2013). ...
Thesis
Running is a gross-motor skill and a popular physical activity, though it comes with a risk of injury. Gait retraining is performed with the intent of managing the risk of running injury. The peak tibial acceleration may be linked with running injuries and is suitable as input for biofeedback. So far, retraining programs using biofeedback on peak tibial acceleration have been bound to a treadmill. Therefore, the objective of this doctoral thesis was to evaluate the effectiveness of a novel music-based biofeedback system on peak tibial acceleration in the context of gait retraining in a training environment. This system is wearable and has lightweight sensors that attach to the lower leg. The sensor first records the tibial acceleration. Then, a processing unit detects the acceleration spike for direct auditory biofeedback. Studies 1 to 5 covered the measurement of peak tibial accelerations, the design of the music-based feedback, and the effectiveness evaluation of the biofeedback system for impact reduction in a training center. In study 1, the peak tibial acceleration of a group of distance runners was reliable within a test and repeatable in a re-test. The peak tibial accelerations increased with running speed and were correlated with the maximum vertical loading rate of the ground reaction force, an impact characteristic derived in the biomechanics laboratory. The developed peak detection algorithm identified the peak tibial acceleration in real time. The music-based feedback was developed in study 2. The music was superimposed with perceptible pink noise, whose intensity could be linked to a biological parameter such as the peak tibial acceleration. The tempo of the music synchronized with the cadence of the runner to motivate the runner, and allowed for a user-induced change in cadence in response to the biofeedback. Studies 3 to 5 examined the effectiveness of the music-based biofeedback on the peak tibial acceleration in a training environment.
We demonstrated that smaller peak values are achievable with the aid of the validated biofeedback system. In study 3, ten runners with high peak tibial acceleration were subjected to biofeedback on the momentary peak tibial acceleration. The group was able to reduce their peak tibial acceleration by 27%, or 3 g, in the biofeedback condition. Study 4 evaluated the initial learning effect within a single session at ~11.5 km/h. The main change in peak acceleration occurred after approximately 8 minutes of biofeedback. However, there was substantial between-subject variation in this time, which ranged from 4 to 1329 gait cycles. Study 5 confirmed the effectiveness of the biofeedback in a quasi-randomized study with a control group. The experimental group received the biofeedback in a 3-week retraining program in which the biofeedback was faded over time. The control group received tempo-synchronized music as a placebo. A running speed of approximately 10 km/h was maintained session after session via speed feedback. All runners completed the running program consisting of 6 sessions. The peak acceleration decreased by 26%, or 3 g, in the experimental group. The smaller peak values in studies 3-5 must have resulted from a movement alteration, although there was no significant change in running cadence at the group level. Studies 6 to 9 give insight into possible strategies for low(er) peak tibial acceleration in level running. In study 6, we discovered that peak tibial accelerations depend on the manner of heel striking. Specifically, a more pronounced heel landing was correlated with smaller axial (1D) and resultant (3D) peak tibial accelerations. The multicenter results of study 7 showed greater resultant peak acceleration in non-rearfoot strikes compared with heel strikes. This greater acceleration was due to an abrupt horizontal deceleration of the lower leg.
In study 8, we described and compared the running mechanics of a successful long-distance runner with a low (impact) load and a high load capacity. A pronounced heel strike in conjunction with long stance and short flight phases characterized this low-impact runner, who successfully completed 100 marathons in 100 days. Study 9 documented adaptations post-biofeedback in a lab setting. There was no clear relationship between the changes in peak tibial acceleration and in running cadence, which confirmed the results of the data captured in the training center. Case analyses showed visually detectable changes in the curve of the vertical ground reaction force. A runner with high peak tibial accelerations changed to a more pronounced rearfoot strike, or to a non-rearfoot strike pattern, to reduce the axial peak tibial acceleration. These results suggest the existence of different distal strategies for impact reduction elicited by biofeedback. Our experiments opened the possibility of impact reduction with the use of real-time auditory biofeedback that is perceptible and motivating. Two motor strategies were discovered for running with less peak tibial acceleration. We hope these findings offer encouragement for runners, coaches, and clinicians who wish to target a form of low(er)-impact running. The biofeedback system effectively modified the running form and has great ecological value due to the portable hardware and energy source for outdoor usage. User-oriented biofeedback systems should become available to the consumer and the patient if proven useful for injury reduction and injury management, respectively. Overall, this doctoral thesis contributed to a better understanding of impact severity in distance running and its reduction in a gait-retraining context with the use of real-time music-based biofeedback.
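The thesis does not spell out its real-time peak detection algorithm here; a hypothetical threshold-plus-local-maximum sketch (threshold and signal values are invented for illustration) conveys the general idea of flagging impact spikes in an acceleration trace.

```python
# Hypothetical impact-peak detector: a sample counts as a peak when it
# exceeds a threshold and is a local maximum of the acceleration signal.
def detect_peaks(signal, threshold=5.0):
    peaks = []
    for i in range(1, len(signal) - 1):
        if signal[i] >= threshold and signal[i - 1] < signal[i] >= signal[i + 1]:
            peaks.append(i)
    return peaks

accel = [0, 1, 7, 3, 0, 2, 9, 4, 1]   # toy axial acceleration trace (g)
detect_peaks(accel)                    # → [2, 6]: indices of the impact peaks
```

In a wearable system, each detected peak would drive the auditory feedback (e.g., the noise level superimposed on the music) within the same gait cycle.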
... Recent work has pointed out that a potential reward is a strong motivator for interacting with music. Reward is associated with midbrain dopamine neurons whose activation reflects the degree of reward predictability (Hollerman & Schultz, 1998; Salimpoor, Zald, Zatorre, Dagher & McIntosh, 2015). Reward processing is clearly related to prediction processing and has dependencies related to arousal and physical effort (Fritz et al., 2013). ...
... However, the biology of these associations is not yet well understood. Based on the literature, good candidates for future studies of neural endophenotypes of musicality-language associations include aspects of development, structure, and function in the following brain areas, amongst others: auditory brainstem (e.g., Skoe et al., 2015); auditory cortex morphology (e.g., Wong et al., 2008); auditory-motor system (e.g., Tierney & Kraus, 2014); motor cortex and SMA (e.g., Cannon & Patel, 2021); striatum activity involved in reward and prediction (e.g., Gold et al., 2019; Salimpoor et al., 2015); cerebellar activation (e.g., McLaughlin & Wilson, 2017); speech areas that have also been linked to music task activation (i.e., Broca's area and RH Broca homologue, e.g., Vuust et al., 2006; 2011); and oscillatory networks involved in neural entrainment to speech and music, as measured with EEG or MEG (e.g., Doelling & Poeppel, 2015). ...
Preprint
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has been often overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. We review and discuss findings from over seventy studies in the literature demonstrating that individual differences in musical aptitude (i.e., rhythm and tonality skills) are robustly correlated with a wide range of speech-language skills that are foundational for effective communication, including speech perception, grammatical abilities, reading-related skills, and second/foreign language learning. From this body of work we conclude that musical abilities are intertwined with speech, language, and reading development over the lifespan. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality.
... The Department of Musicology actively developed medieval studies during the first period of activity (emphasis was placed on the study of the history and development of European schools of composition, namely, on the study of local musical traditions) (Salimpoor et al., 2015; Perlovsky, 2010; Patel & Iversen, 2007). This direction was a priority in the research of Professor A. Chybiński. ...
Article
Full-text available
The relevance of the research topic lies in the need for a comprehensive study of the educational and scientific activities of the departments of musicology of Krakow and Lviv universities during the first half of the twentieth century, in the context of the interaction between Ukrainian and foreign higher education institutions, in order to determine the features and value of Ukrainian music science within the European scientific and educational space. The purpose of the research is to uncover and analyze the creative activity of the departments of musicology at the Krakow and Lviv Universities in the context of Jagellonism, through consideration of the principles of European musical academic education from the establishment of these institutions in 1911-1912 until the beginning of the XXI century. The scientific novelty of the research consists in establishing the leading role of Z. Jachimecki and A. Chybiński in the formation and development of national musicological schools in Poland and Ukraine as inheritors of the cultural genetic code of Jagiellonism. That cultural genetic code consisted of special attention to inter-Slavic relations within the Western-Eastern cultural heritage.
... This likely provides a mechanism through which music that is experienced as pleasing can enhance dopamine-mediated positive prediction error signaling and reinforcement learning. Thus, the association of dopamine release and NAc activation during peak musical pleasure may be a direct manifestation of this opioid-dopamine interaction (Salimpoor et al., 2015). ...
Article
Full-text available
Music is a crucial element of everyday life and plays a central role in all human cultures: it is omnipresent and is listened to and played by persons of all ages, races, and ethnic backgrounds. But music is not simply entertainment: scientific research has shown that it can influence physiological processes that enhance physical and mental wellbeing. Consequently, it can have critical adaptive functions. Studies on patients diagnosed with mental disorders have shown a visible improvement in their mental health after interventions using music as the primary tool. Other studies have demonstrated the benefits of music, including improved heart rate, motor skills, brain stimulation, and immune system enhancement. Mental and physical illnesses can be costly in terms of medications and psychological care, and music can offer a less expensive addition to an individual's treatment regimen. Interventions using music offer music-based activities in both a therapeutic environment (music therapy), with the support of a trained professional, and a non-therapeutic setting, providing an atmosphere that is positive, supportive, and proactive while teaching non-invasive techniques to treat symptoms associated with various disorders – and possibly modulate the immune system.
... Furthermore, frontal and parietal areas have been related to emotional control in reaction to music 9 . Fronto-temporal loops also underlie working memory and PE processing in the auditory and musical domains 25,26 , functions hypothesized to be involved in music-evoked pleasantness by enabling the temporal representations and predictive dynamics necessary for music perception and its subsequent affective evaluation 5 . ...
Article
Full-text available
Music-evoked pleasantness has been extensively reported to be modulated by familiarity. Nevertheless, while the brain temporal dynamics underlying the process of giving value to music are beginning to be understood, little is known about how familiarity might modulate the oscillatory activity associated with music-evoked pleasantness. The goal of the present experiment was to study the influence of familiarity on the relation between theta phase synchronization and music-evoked pleasantness. EEG was recorded from 22 healthy participants while they were listening to both familiar and unfamiliar music and rating the experienced degree of evoked pleasantness. By exploring interactions, we found that right fronto-temporal theta synchronization was positively associated with music-evoked pleasantness when listening to unfamiliar music. On the contrary, inter-hemispheric temporo-parietal theta synchronization was positively associated with music-evoked pleasantness when listening to familiar music. These results shed some light on the possible oscillatory mechanisms underlying fronto-temporal and temporo-parietal connectivity and their relationship with music-evoked pleasantness and familiarity.
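Theta phase synchronization of the kind analyzed in this study is commonly indexed by the phase-locking value (PLV); the sketch below (synthetic phase series and parameters are invented, not the study's data or pipeline) shows the basic computation.

```python
import cmath
import math
import random

random.seed(0)
fs, dur, theta_hz = 250, 2.0, 6.0          # sampling rate (Hz), duration (s), theta frequency
n = int(fs * dur)

# Channel 1: a 6 Hz theta oscillation's instantaneous phase.
phase1 = [2 * math.pi * theta_hz * (i / fs) for i in range(n)]
# Channel 2: tracks channel 1 with a fixed lag plus small jitter (phase-locked).
phase2 = [p + 0.3 + random.gauss(0, 0.05) for p in phase1]
# Channel 3: unrelated random phases (no consistent phase relation).
phase3 = [random.uniform(0, 2 * math.pi) for _ in range(n)]

def plv(pa, pb):
    # Magnitude of the mean phase-difference vector: 1 = perfect locking, ~0 = none.
    return abs(sum(cmath.exp(1j * (a - b)) for a, b in zip(pa, pb)) / len(pa))

plv(phase1, phase2)   # close to 1: phases locked despite the constant lag
plv(phase1, phase3)   # close to 0: no consistent phase relation
```

In practice, the phase series would be extracted from theta-band-filtered EEG (e.g., via a Hilbert transform) rather than generated analytically as here.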
Article
We argue that music can serve as a time-sensitive lens into the interplay between instrumental and ritual stances in cultural evolution. Over various timescales, music can switch between pursuing an end goal or not, and between presenting a causal opacity that is resolvable, or not. With these fluctuations come changes in the motivational structures that drive innovation versus copying.
Article
Previous studies have evidenced how the local prediction of physical stimulus features may affect the neural processing of incoming stimuli. Less known are the effects of cognitive priors on predictive processes, and how the brain computes local versus cognitive predictions and their errors. Here, we determined the differential brain mechanisms underlying prediction errors related to high-level, cognitive priors for melody (rhythm, contour) versus low-level, local acoustic priors (tuning, timbre). We measured with magnetoencephalography the mismatch negativity (MMN) prediction error signal in 104 adults having varying levels of musical expertise. We discovered that the brain regions involved in early predictive processes for local priors were primary and secondary auditory cortex and insula, whereas cognitive brain regions such as cingulate and orbitofrontal cortices were recruited for early melodic errors in cognitive priors. The involvement of higher-level brain regions for computing early cognitive errors was enhanced in musicians, especially in cingulate cortex, inferior frontal gyrus, and supplementary motor area. Overall, the findings expand knowledge on whole-brain mechanisms of predictive processing and the related MMN generators, previously mainly confined to the auditory cortex, to a frontal network that strictly depends on the type of priors that are to be computed by the brain.
Preprint
Full-text available
Seeking exposure to unfamiliar experiences constitutes an essential aspect of the human condition, and the brain must adapt to the constantly changing environment by learning the evolving statistical patterns emerging from it. Cultures are shaped by norms and conventions and therefore novel exposure to an unfamiliar culture induces a type of learning that is often described as implicit: when exposed to a set of stimuli constrained by unspoken rules, cognitive systems must rapidly build a mental representation of the underlying grammar. Music offers a unique opportunity to investigate this implicit statistical learning, as sequences of tones forming melodies exhibit structural properties learned by listeners during short- and long-term exposure. Understanding which specific structural properties of music enhance learning in naturalistic learning conditions reveals hard-wired properties of cognitive systems while elucidating the prevalence of these features across cultural variations. Here we provide behavioral and neural evidence that the prevalence of non-uniform musical scales may be explained by their facilitating effects on melodic learning. In this study, melodies were generated using an artificial grammar with either a uniform (rare) or non-uniform (prevalent) scale. After a short exposure phase, listeners had to detect ungrammatical new melodies while their EEG responses were recorded. Listeners' performance on the task suggested that the extent of statistical learning during music listening depended on the musical scale context: non-uniform scales yielded better syntactic learning. This behavioral effect was mirrored by enhanced encoding of musical syntax in the context of non-uniform scales, which further suggests that their prevalence stems from fundamental properties of learning.
Article
Full-text available
Technologies, such as mobile devices or sets of connected sensors, provide new and engaging opportunities to devise music-based interventions. Among the different technological options, serious games offer a valuable alternative. Serious games can engage multisensory processes, creating a rich, rewarding, and motivating rehabilitation setting. Moreover, they can be targeted to specific musical features, such as pitch production or synchronization to a beat. Because serious games are typically low cost and enjoy wide access, they are inclusive tools perfectly suited for remote at-home interventions using music in various patient populations and environments. The focus of this article is in particular on the use of rhythmic serious games for training auditory-motor synchronization. After reviewing the existing rhythmic games, initial evidence from a recent proof-of-concept study in Parkinson's disease is provided. It is shown that rhythmic video games using finger tapping can be used with success as an at-home protocol, and bring about beneficial effects on motor performance in patients. The use and benefits of rhythmic serious games can extend beyond the rehabilitation of patients with movement disorders, such as to neurodevelopmental disorders, including dyslexia and autism spectrum disorder.
Article
Recent statistical studies have suggested a relationship between increased harmonic surprise and music preference. Conclusive behavioral evidence to establish this relationship is still lacking. We set out to address this gap through a behavioral study using computer-generated stimuli designed to differ only in contrastive and absolute harmonic surprise. We produced the stimuli with both experimental control and ecological validity in mind by engaging the help of studio musicians. The stimuli were rated for preference by 84 participants (44 female, 40 male) between 18 and 65 years old. Participants rated items featuring moderately increased absolute and contrastive surprise significantly higher than items with lower harmonic surprise. This effect applied only to levels of surprise within a range typically found in popular music, however. Excessive surprises did not yield an increase in preference. We discuss different mechanisms of consistency and how they may mediate the selection of neural strategies leading to preference formation. These findings provide evidence of a causal behavioral relationship between harmonic surprise and music preference.
Article
This paper presents the argument that inherent musicality in human movement is near-universal. I examine data and empirical evidence which suggest that dance, music, speech, and bipedalism are interrelated characteristics, rooted in the earliest moments of our history. The combination of proto-musical, rhythmic, tonal vocalisation and explanatory gesture has been suggested as the seminal beginning, both of dance and of language. This topic has been vigorously debated, indeed some twentieth-century studies dispute the universality of human musicality. Recent technological advances have, however, revealed data which support the case for innate, universal human musicality. I discuss possible reasons for adaptations for music, dance, and speech, and offer examples from neuroscience of our innate beat perception and entrainment ability, with consequent implications for dance as therapy and rehabilitation.
Article
Full-text available
Extensive research suggests that reinforcement learning and goal-seeking behavior are mediated by midbrain dopamine neurons. However, little is known about neural substrates of curiosity and exploratory behavior, which occur in the absence of clear goal or reward. This is despite behavioral scientists having long suggested that curiosity and exploratory behaviors are regulated by an innate drive. We refer to such behavior as information-seeking behavior and propose 1) key neural substrates and 2) the concept of environment prediction error as a framework to understand information-seeking processes. The cognitive aspect of information-seeking behavior, including the perception of salience and uncertainty, involves, in part, the pathways from the posterior hypothalamic supramammillary region to the hippocampal formation. The vigor of such behavior is modulated by the following: supramammillary glutamatergic neurons; their projections to medial septal glutamatergic neurons; and the projections of medial septal glutamatergic neurons to ventral tegmental dopaminergic neurons. Phasic responses of dopaminergic neurons are characterized as signaling potentially important stimuli rather than rewards. This paper describes how novel stimuli and uncertainty trigger seeking motivation and how these neural substrates modulate information-seeking behavior.
Thesis
Full-text available
Taking into consideration the limited time assigned to music class in Chile, and attending to students' learning needs, knowledge of theories and research findings on cognitive processing seems a significant way to enhance rhythmic musical training in primary school. The general aim of this thesis was to suggest pedagogical guidelines for the rhythmic education of third- and fourth-year Basic Education students in Chile, based on a double source of information: 1) a review of theoretical and empirical studies on the cognitive processing of rhythmic information; and 2) the teaching practice of Chilean music teachers. The methodology of this research is mixed (qualitative-quantitative) and exploratory in nature, with data collected through a survey and focus groups. According to the references consulted, the factors that facilitate rhythmic processing are: an isochronous pulse in an optimal tempo range between 100 and 120 bpm; the meter, preferably binary; and temporal grouping, given mainly by rhythmic patterns with internal 2:1 duration ratios. The results of the survey and the focus groups indicate that teachers attach great importance to pulse. These findings converge with studies showing that isochronous pulsation is critical for rhythmic processing. In relation to tempo, teachers use the "individual pulse" of each student and prefer slow tempi in the range of 60-90 bpm. Considering individual tempo could be positive, since children are more successful in rhythmic tasks at a tempo close to their spontaneous personal tempo. However, tempi between 60 and 90 bpm are not congruent with the optimal range of 100-120 bpm. In general, teachers use meters in which 4/4 predominates, followed by 3/4 and 2/4. These results converge with the relevance of meter, especially binary meter, for facilitating the psychological understanding of rhythm.
Teachers consider patterns important; specifically, a clear inclination towards rhythmic figures with a 2:1 duration ratio was observed. These results coincide with the greater efficiency of working memory when using patterns and with the natural tendency towards the 2:1 ratio. In relation to didactic strategies, body movement and language are the most used resources, both separately and in combination. These findings agree with the influence that bodily expression can have on improving the perception and representation of musical rhythms. There is also an important link between rhythmic-temporal skills and reading skills in children. Future studies should be carried out in primary schools to confirm or refute the present findings and theories. In addition, research on neurocognition should continue to be related to music education, for planning, designing, and implementing study programs and methodologies with solid theoretical-cognitive bases.
Article
Attentional capture by previously reward-associated stimuli has predominantly been measured in the visual domain. Recently, behavioral studies of value-driven attention have demonstrated involuntary attentional capture by previously reward-associated sounds, emulating behavioral findings within the visual domain and suggesting a common mechanism of attentional capture by value across sensory modalities. However, the neural correlates of the modulatory role of learned value on the processing of auditory information have not been examined. Here, we conducted a neuroimaging study on human participants using a previously established behavioral paradigm that measures value-driven attention in an auditory target identification task. We replicate behavioral findings of both voluntary prioritization and involuntary attentional capture by previously reward-associated sounds. When task-relevant, the selective processing of high-value sounds is supported by reduced activation in the dorsal attention network of the visual system (FEF, intraparietal sulcus, right middle frontal gyrus), implicating cross-modal processes of biased competition. When task-irrelevant, in contrast, high-value sounds evoke elevated activation in posterior parietal cortex and are represented with greater fidelity in the auditory cortex. Our findings reveal two distinct mechanisms of prioritizing reward-related auditory signals, with voluntary and involuntary modes of orienting that are differently manifested in biased competition.
Article
The evolutionary origins of complex capacities such as musicality are not simple, and likely involved many interacting steps of musicality-specific adaptations, exaptations, and cultural creation. A full account of the origins of musicality needs to consider the role of ancient adaptations such as credible singing, auditory scene analysis, and prediction-reward circuits in constraining the emergence of musicality.
Article
Savage et al. argue for musicality as having evolved for the overarching purpose of social bonding. By way of contrast, we highlight contemporary predictive processing models of human cognitive functioning in which the production and enjoyment of music follows directly from the principle of prediction error minimization.
Article
We propose that not social bonding, but rather a different mechanism underlies the development of musicality: being unable to survive alone. The evolutionary constraint of being dependent on other humans for survival provides the ultimate driving force for acquiring human faculties such as sociality and musicality, through mechanisms of learning and neural plasticity. This evolutionary mechanism maximizes adaptation to a dynamic environment.
Article
The two target articles agree that processes of cultural evolution generate richness and diversity in music, but neither address this question in a focused way. We sketch one way to proceed – and hence suggest how the target articles differ not only in empirical claims, but also in their tacit, prior assumptions about the relationship between cognition and culture.
Chapter
Full-text available
Article
Full-text available
As we experience a temporal flux of events our expectations of future events change. Such expectations seem to be central to our perception of affect in music, but we have little understanding of how expectations change as recent information is integrated. When music establishes a pitch centre (tonality), we rapidly learn to anticipate its continuation. What happens when anticipations are challenged by new events? Here we show that providing a melodic challenge to an established tonality leads to progressive changes in the impact of the features of the stimulus on listeners' expectations. The results demonstrate that retrospective analysis of recent events can establish new patterns of expectation that converge towards probabilistic interpretations of the temporal stream. These studies point to wider applications of understanding the impact of information flow on future prediction and its behavioural utility.
Article
Full-text available
Drug-associated cues can acquire powerful motivational control over the behavior of addicts, and can contribute to relapse via multiple, dissociable mechanisms. Most preclinical models of relapse focus on only one of these mechanisms: the ability of drug cues to reinforce drug-seeking actions following a period of extinction training. However, in addicts, drug cues typically do not follow seeking actions; they precede them. They often produce relapse by evoking a conditioned motivational state ("wanting" or "craving") that instigates and/or invigorates drug-seeking behavior. Here we used a conflict-based relapse model to ask whether individual variation in the propensity to attribute incentive salience to reward cues predicts variation in the ability of a cocaine cue to produce conditioned motivation (craving) for cocaine. Following self-administration training, responding was curtailed by requiring rats to cross an electrified floor to take cocaine. The subsequent response-independent presentation of a cocaine-associated cue was sufficient to reinstate drug-seeking behavior, despite the continued presence of the adverse consequence. Importantly, there were large individual differences in the motivational properties of the cocaine cue, which were predicted by variation in the propensity to attribute incentive salience to a food cue. Finally, a dopamine antagonist injected into the nucleus accumbens core attenuated, and amphetamine facilitated, cue-evoked cocaine seeking, implicating dopamine signaling in cocaine cue-evoked craving. These data provide a promising preclinical approach for studying sources of individual variation in susceptibility to relapse due to conditioned craving and implicate mesolimbic dopamine in this process.
Article
Full-text available
Predictions about future rewarding events have a powerful influence on behaviour. The phasic spike activity of dopamine-containing neurons, and corresponding dopamine transients in the striatum, are thought to underlie these predictions, encoding positive and negative reward prediction errors. However, many behaviours are directed towards distant goals, for which transient signals may fail to provide sustained drive. Here we report an extended mode of reward-predictive dopamine signalling in the striatum that emerged as rats moved towards distant goals. These dopamine signals, which were detected with fast-scan cyclic voltammetry (FSCV), gradually increased (or, in rare instances, decreased) as the animals navigated mazes to reach remote rewards, rather than having phasic or steady tonic profiles. These dopamine increases (ramps) scaled flexibly with both the distance and size of the rewards. During learning, these dopamine signals showed spatial preferences for goals in different locations and readily changed in magnitude to reflect changing values of the distant rewards. Such prolonged dopamine signalling could provide sustained motivational drive, a control mechanism that may be important for normal behaviour and that can be impaired in a range of neurologic and neuropsychiatric disorders.
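One illustrative way to think about such ramps is a discounted-value account, in which expected value rises as the remaining distance to the goal shrinks and scales with reward size. A minimal sketch under that assumption; the function name, discount factor, and reward values are hypothetical, not quantities from the study:

```python
def value_ramp(distance_to_goal, reward, gamma=0.8):
    """Discounted expected value at a given distance (in steps) from a reward.

    Purely illustrative: gamma and the step-based distance measure are
    assumptions, not parameters estimated from the voltammetry data.
    """
    return reward * (gamma ** distance_to_goal)

# Sample the value signal as the animal approaches the goal (10 steps -> 0).
ramp = [value_ramp(d, reward=1.0) for d in range(10, -1, -1)]

# The signal rises monotonically toward the goal...
assert all(a < b for a, b in zip(ramp, ramp[1:]))
# ...and scales with reward size, as reported for the dopamine ramps.
assert value_ramp(5, reward=2.0) > value_ramp(5, reward=1.0)
```

The sketch captures only the two qualitative properties the abstract reports (gradual increase during approach, scaling with reward magnitude), not the observed signal shapes.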
Article
Full-text available
Feelings in response to music are often accompanied by measurable bodily reactions such as goose bumps or shivers down the spine, commonly called "chills." In order to investigate distinct acoustical and musical structural elements related to chill reactions, reported chill reactions and bodily reactions were measured continuously. Chill reactions did not show a simple stimulus-response pattern, nor did they depend on personality traits such as low sensation seeking and high reward dependence. Musical preferences and listening situations also played a role in chill reactions. Participants seemed to react to musical patterns, not to mere acoustical triggers. The entry of a voice and changes in volume were shown to be the most reactive patterns. These results were also confirmed by a retest experiment.
Article
Full-text available
We explored how musical culture shapes one's listening experience. Western participants heard a series of tones drawn from either the Western major mode (culturally familiar) or the Indian thaat Bhairav (culturally unfamiliar) and then heard a test tone. They made a speeded judgment about whether the test tone was present in the prior series of tones. Interactions between mode (Western or Indian) and test tone type (congruous or incongruous) reflect the utilization of Western modal knowledge to make judgments about the test tones. False alarm rates were higher for test tones congruent with the major mode than for test tones congruent with Bhairav. In contrast, false alarm rates were lower for test tones incongruent with the major mode than for test tones incongruent with Bhairav. These findings suggest that one's internalized cultural knowledge may drive musical expectancies when listening to music of an unfamiliar modal system.
Article
Full-text available
Evaluating series of complex sounds like those in speech and music requires sequential comparisons to extract task-relevant relations between subsequent sounds. With the present functional magnetic resonance imaging (fMRI) study, we investigated whether sequential comparison of a specific acoustic feature within pairs of tones leads to a change in lateralized processing in the auditory cortex (AC) of humans. For this we used the active categorization of the direction (up vs. down) of slow frequency modulated (FM) tones. Several studies suggest that this task is mainly processed in the right AC. These studies, however, tested only the categorization of the FM direction of each individual tone. In the present study we ask the question whether the right lateralized processing changes when, in addition, the FM direction is compared within pairs of successive tones. For this we use an experimental approach involving contralateral noise presentation in order to explore the contributions made by the left and right AC in the completion of the auditory task. This method has already been applied to confirm the right-lateralized processing of the FM direction of individual tones. In the present study, the subjects were required to perform, in addition, a sequential comparison of the FM direction in pairs of tones. The results suggest a division of labor between the two hemispheres such that the FM direction of each individual tone is mainly processed in the right AC whereas the sequential comparison of this feature between tones in a pair is probably performed in the left AC.
Article
Full-text available
The sound of music may arouse profound emotions in listeners. But such experiences seem to involve a 'paradox', namely that music - an abstract form of art, which appears removed from our concerns in everyday life - can arouse emotions - biologically evolved reactions related to human survival. How are these (seemingly) non-commensurable phenomena linked together? Key is to understand the processes through which sounds are imbued with meaning. It can be argued that the survival of our ancient ancestors depended on their ability to detect patterns in sounds, derive meaning from them, and adjust their behavior accordingly. Such an ecological perspective on sound and emotion forms the basis of a recent multi-level framework that aims to explain emotional responses to music in terms of a large set of psychological mechanisms. The goal of this review is to offer an updated and expanded version of the framework that can explain both 'everyday emotions' and 'aesthetic emotions'. The revised framework - referred to as BRECVEMA - includes eight mechanisms: Brain Stem Reflex, Rhythmic Entrainment, Evaluative Conditioning, Contagion, Visual Imagery, Episodic Memory, Musical Expectancy, and Aesthetic Judgment. In this review, it is argued that all of the above mechanisms may be directed at information that occurs in a 'musical event' (i.e., a specific constellation of music, listener, and context). Of particular significance is the addition of a mechanism corresponding to aesthetic judgments of the music, to better account for typical 'appreciation emotions' such as admiration and awe. Relationships between aesthetic judgments and other mechanisms are reviewed based on the revised framework. It is suggested that the framework may contribute to a long-needed reconciliation between previous approaches that have conceptualized music listeners' responses in terms of either 'everyday emotions' or 'aesthetic emotions'.
Article
Full-text available
The mismatch negativity (MMN), an event-related potential (ERP) representing the violation of an acoustic regularity, is considered as a pre-attentive change detection mechanism at the sensory level on the one hand and as a prediction error signal on the other hand, suggesting that bottom-up as well as top-down processes are involved in its generation. Rhythmic and melodic deviations within a musical sequence elicit a MMN in musically trained subjects, indicating that acquired musical expertise leads to better discrimination accuracy of musical material and better predictions about upcoming musical events. Expectation violations to musical material could therefore recruit neural generators that reflect top-down processes that are based on musical knowledge. We describe the neural generators of the musical MMN for rhythmic and melodic material after a short-term sensorimotor-auditory (SA) training. We compare the localization of musical MMN data from two previous MEG studies by applying beamformer analysis. One study focused on melodic harmonic progression whereas the other focused on rhythmic progression. The MMN to melodic deviations revealed significant right-hemispheric neural activation in the superior temporal gyrus (STG), inferior frontal cortex (IFC), and the superior frontal (SFG) and orbitofrontal (OFG) gyri. IFC and SFG activation was also observed in the left hemisphere. In contrast, beamformer analysis of the data from the rhythm study revealed bilateral activation within the vicinity of auditory cortices and in the inferior parietal lobule (IPL), an area that has recently been implicated in temporal processing. We conclude that different cortical networks are activated in the analysis of the temporal and the melodic content of musical material, and discuss these networks in the context of the dual-pathway model of auditory processing.
Article
Full-text available
Activation of mu opioid receptors within the ventral tegmental area (VTA) can produce reward through the inhibition of GABAergic inputs. GABAergic neurons in the ventral pallidum (VP) provide a major input to VTA neurons. To determine the specific VTA neuronal targets of VP afferents and their sensitivity to mu opioid receptor agonists, we virally expressed channelrhodopsin-2 (ChR2) in rat VP neurons and optogenetically activated their terminals in the VTA. Light activation of VP neuron terminals elicited GABAergic IPSCs in both dopamine (DA) and non-DA VTA neurons, and these IPSCs were inhibited by the mu opioid receptor agonist DAMGO. In addition, using a fluorescent retrograde marker to identify VTA-projecting VP neurons, we found them to be hyperpolarized by DAMGO. Both of these actions decrease GABAergic input onto VTA neurons, revealing two mechanisms by which endogenous or exogenous opioids can activate VTA neurons, including DA neurons.
Article
Full-text available
This study investigates the functional neuroanatomy of harmonic music perception with functional magnetic resonance imaging (fMRI). We presented short pieces of Western classical music to nonmusicians. The ending of each piece was systematically manipulated in the following four ways: Standard Cadence (expected resolution), Deceptive Cadence (moderate deviation from expectation), Modulated Cadence (strong deviation from expectation but remaining within the harmonic structure of Western tonal music), and Atonal Cadence (strongest deviation from expectation by leaving the harmonic structure of Western tonal music). Music compared with baseline broadly recruited regions of the bilateral superior temporal gyrus (STG) and the right inferior frontal gyrus (IFG). Parametric regressors scaled to the degree of deviation from harmonic expectancy identified regions sensitive to expectancy violation. Areas within the basal ganglia (BG) were significantly modulated by expectancy violation, indicating a previously unappreciated role in harmonic processing. Expectancy violation also recruited bilateral cortical regions in the IFG and anterior STG, previously associated with syntactic processing in other domains. The posterior STG was not significantly modulated by expectancy. Granger causality mapping found functional connectivity between IFG, anterior STG, posterior STG, and the BG during music perception. Our results imply the IFG, anterior STG, and the BG are recruited for higher-order harmonic processing, whereas the posterior STG is recruited for basic pitch and melodic processing.
Article
Full-text available
Recent work has advanced our knowledge of phasic dopamine reward prediction error signals. The error signal is bidirectional, reflects well the higher order prediction error described by temporal difference learning models, is compatible with model-free and model-based reinforcement learning, reports the subjective rather than physical reward value during temporal discounting and reflects subjective stimulus perception rather than physical stimulus aspects. Dopamine activations are primarily driven by reward, and to some extent risk, whereas punishment and salience have only limited activating effects when appropriate controls are respected. The signal is homogeneous in terms of time course but heterogeneous in many other aspects. It is essential for synaptic plasticity and a range of behavioural learning situations.
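The bidirectional prediction-error signal described above is commonly formalized with temporal-difference (TD) learning, which the abstract itself references. A minimal sketch of the TD update; the function name, learning rate, and reward values are illustrative assumptions:

```python
def td_update(value, reward, next_value, alpha=0.1, gamma=0.9):
    """One temporal-difference update step.

    `delta` plays the role of the bidirectional dopamine-like reward
    prediction error: positive when outcomes exceed the prediction,
    negative when a predicted reward is omitted.
    """
    delta = reward + gamma * next_value - value  # prediction error
    return value + alpha * delta, delta

# An initially unpredicted reward: positive errors shrink as the
# prediction converges toward the delivered reward.
v = 0.0
errors = []
for _ in range(50):
    v, delta = td_update(v, reward=1.0, next_value=0.0)
    errors.append(delta)
assert errors[0] > errors[-1] > 0  # error decreases with learning

# Omitting the now-expected reward yields a negative error
# (the dip below baseline seen in dopamine recordings).
_, omission_error = td_update(v, reward=0.0, next_value=0.0)
assert omission_error < 0
```

This is the model-free textbook form of the update; it does not capture the model-based, subjective-value, or risk-related refinements the review discusses.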
Article
Full-text available
Humans are able to find and tap to the beat of musical rhythms varying in complexity from children's songs to modern jazz. Musical beat has no one-to-one relationship with auditory features: it is an abstract perceptual representation that emerges from the interaction between sensory cues and higher-level cognitive organization. Previous investigations have examined the neural basis of beat processing but have not tested the core phenomenon of finding and tapping to the musical beat. To test this, we used fMRI and had musicians find and tap to the beat of rhythms that varied from metrically simple to metrically complex, and thus from a strong to a weak beat. Unlike most previous studies, we measured beat tapping performance during scanning and controlled for possible effects of scanner noise on beat perception. Results showed that beat finding and tapping recruited largely overlapping brain regions, including the superior temporal gyrus (STG), premotor cortex, and ventrolateral pFC (VLPFC). Beat tapping activity in STG and VLPFC was correlated with both perception and performance, suggesting that they are important for retrieving, selecting, and maintaining the musical beat. In contrast, basal ganglia (BG) activity was similar in all conditions and was not correlated with either perception or production, suggesting that it may be involved in detecting auditory temporal regularity or in associating auditory stimuli with a motor response. Importantly, functional connectivity analyses showed that these systems interact, indicating that more basic sensorimotor mechanisms instantiated in the BG work in tandem with higher-order cognitive mechanisms in pFC.
Article
Full-text available
Depth-electrode recordings from the auditory cortex of humans undergoing presurgical evaluation for epilepsy allow the recording of ensemble responses to pitch in the form of local field potentials. These recordings allow another test of the hypothesis that there is a specialized neural ensemble for pitch within auditory cortex. Moreover, the technique allows recordings from multiple sites with millisecond temporal resolution to allow modeling of the effective connectivity between these sites. Here we argue that this takes the form of a hierarchical network of pitch-sensitive regions. Activity can be understood as reflecting predictive coding, in which perceptual predictions and error messages are continuously exchanged between a higher pitch center and lower-level auditory cortex.
Article
Full-text available
The perception of a melody is invariant to the absolute properties of its constituting notes, but depends on the relation between them-the melody's relative pitch profile. In fact, a melody's "Gestalt" is recognized regardless of the instrument or key used to play it. Pitch processing in general is assumed to occur at the level of the auditory cortex. However, it is unknown whether early auditory regions are able to encode pitch sequences integrated over time (i.e., melodies) and whether the resulting representations are invariant to specific keys. Here, we presented participants different melodies composed of the same 4 harmonic pitches during functional magnetic resonance imaging recordings. Additionally, we played the same melodies transposed in different keys and on different instruments. We found that melodies were invariantly represented by their blood oxygen level-dependent activation patterns in primary and secondary auditory cortices across instruments, and also across keys. Our findings extend common hierarchical models of auditory processing by showing that melodies are encoded independent of absolute pitch and based on their relative pitch profile as early as the primary auditory cortex.
Article
Full-text available
Experimental investigations of cross-cultural music perception and cognition reported during the past decade are described. As globalization and Western music homogenize the world musical environment, it is imperative that diverse music and musical contexts are documented. Processes of music perception include grouping and segmentation, statistical learning and sensitivity to tonal and temporal hierarchies, and the development of tonal and temporal expectations. The interplay of auditory, visual, and motor modalities is discussed in light of synchronization and the way music moves via emotional response. Further research is needed to test deep-rooted psychological assumptions about music cognition with diverse materials and groups in dynamic contexts. Although empirical musicology provides keystones to unlock musical structures and organization, the psychological reality of those theorized structures for listeners and performers, and the broader implications for theories of music perception and cognition, awaits investigation.
Article
Full-text available
In this paper, we present two novel perspectives on the function of the left inferior frontal gyrus (LIFG). First, a structured sequence processing perspective facilitates the search for functional segregation within the LIFG and provides a way to express common aspects across cognitive domains including language, music and action. Converging evidence from functional magnetic resonance imaging and transcranial magnetic stimulation studies suggests that the LIFG is engaged in sequential processing in artificial grammar learning, independently of particular stimulus features of the elements (whether letters, syllables or shapes are used to build up sequences). The LIFG has been repeatedly linked to processing of artificial grammars across all different grammars tested, whether they include non-adjacent dependencies or mere adjacent dependencies. Second, we apply the sequence processing perspective to understand how the functional segregation of semantics, syntax and phonology in the LIFG can be integrated in the general organization of the lateral prefrontal cortex (PFC). Recently, it was proposed that the functional organization of the lateral PFC follows a rostro-caudal gradient, such that more abstract processing in cognitive control is subserved by more rostral regions of the lateral PFC. We explore the literature from the viewpoint that functional segregation within the LIFG can be embedded in a general rostro-caudal abstraction gradient in the lateral PFC. If the lateral PFC follows a rostro-caudal abstraction gradient, then this predicts that the LIFG follows the same principles, but this prediction has not yet been tested or explored in the LIFG literature. Integration might provide further insights into the functional architecture of the LIFG and the lateral PFC.
Article
Full-text available
We used functional magnetic resonance imaging to investigate the neural basis of the mere exposure effect in music listening, which links previous exposure to liking. Prior to scanning, participants underwent a learning phase, where exposure to melodies was systematically varied. During scanning, participants rated liking for each melody and, later, their recognition of them. Participants showed learning effects, better recognising melodies heard more often. Melodies heard most often were most liked, consistent with the mere exposure effect. We found neural activations as a function of previous exposure in bilateral dorsolateral prefrontal and inferior parietal cortex, probably reflecting retrieval and working memory-related processes. This was despite the fact that the task during scanning was to judge liking, not recognition, thus suggesting that appreciation of music relies strongly on memory processes. Subjective liking per se caused differential activation in the left hemisphere, of the anterior insula, the caudate nucleus, and the putamen.
Article
Full-text available
Perception of temporal patterns is critical for speech, movement, and music. In the auditory domain, perception of a regular pulse, or beat, within a sequence of temporal intervals is associated with basal ganglia activity. Two alternative accounts of this striatal activity are possible: "searching" for temporal regularity in early stimulus processing stages or "prediction" of the timing of future tones after the beat is found (relying on continuation of an internally generated beat). To resolve between these accounts, we used functional magnetic resonance imaging (fMRI) to investigate different stages of beat perception. Participants heard a series of beat and nonbeat (irregular) monotone sequences. For each sequence, the preceding sequence provided a temporal beat context for the following sequence. Beat sequences were preceded by nonbeat sequences, requiring the beat to be found anew ("beat finding" condition), or by beat sequences with the same beat rate ("beat continuation"), or a different rate ("beat adjustment"). Detection of regularity is highest during beat finding, whereas generation and prediction are highest during beat continuation. We found the greatest striatal activity for beat continuation, less for beat adjustment, and the least for beat finding. Thus, the basal ganglia's response profile suggests a role in beat prediction, not in beat finding.
Article
Full-text available
We used fMRI to investigate the neuronal correlates of encoding and recognizing heard and imagined melodies. Ten participants were shown lyrics of familiar verbal tunes; they either heard the tune along with the lyrics, or they had to imagine it. In a subsequent surprise recognition test, they had to identify the titles of tunes that they had heard or imagined earlier. The functional data showed substantial overlap during melody perception and imagery, including secondary auditory areas. During imagery compared with perception, an extended network including pFC, SMA, intraparietal sulcus, and cerebellum showed increased activity, in line with the increased processing demands of imagery. Functional connectivity of anterior right temporal cortex with frontal areas was increased during imagery compared with perception, indicating that these areas form an imagery-related network. Activity in right superior temporal gyrus and pFC was correlated with the subjective rating of imagery vividness. Similar to the encoding phase, the recognition task recruited overlapping areas, including inferior frontal cortex associated with memory retrieval, as well as left middle temporal gyrus. The results present new evidence for the cortical network underlying goal-directed auditory imagery, with a prominent role of the right pFC both for the subjective impression of imagery vividness and for on-line mental monitoring of imagery-related activity in auditory areas.
Article
Full-text available
The ventromedial prefrontal cortex (vmPFC) comprises a set of interconnected regions that integrate information from affective sensory and social cues, long-term memory, and representations of the 'self'. Although the vmPFC is implicated in a variety of seemingly disparate processes, these processes are organized around a common theme. The vmPFC is not necessary for affective responses per se, but is critical when affective responses are shaped by conceptual information about specific outcomes. The vmPFC thus functions as a hub that links concepts with brainstem systems capable of coordinating organism-wide emotional behavior, a process we describe in terms of the generation of affective meaning, and which could explain the common role played by the vmPFC in a range of experimental paradigms.
Article
The present study uses music as a tool to induce emotion, and functional magnetic resonance imaging (fMRI) to determine neural correlates of emotion processing. We found that listening to pleasant music activated the larynx representation in the rolandic operculum. The larynx is the source of vocal sound, and involved in the production of melody, rhythm, and emotional modulation of the vocal timbre during vocalization. The activation of the larynx is reminiscent of the activation of premotor areas during the observation of grasping movements and might indicate that a system for the perception-action mediation which has been reported for the visual domain also exists in the auditory domain.
Article
To make sound economic decisions, the brain needs to compute several different value-related signals. These include goal values that measure the predicted reward that results from the outcome generated by each of the actions under consideration, decision values that measure the net value of taking the different actions, and prediction errors that measure deviations from individuals' previous reward expectations. We used functional magnetic resonance imaging and a novel decision-making paradigm to dissociate the neural basis of these three computations. Our results show that they are supported by different neural substrates: goal values are correlated with activity in the medial orbitofrontal cortex, decision values are correlated with activity in the central orbitofrontal cortex, and prediction errors are correlated with activity in the ventral striatum.
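The prediction errors described in this abstract are commonly formalized as the deviation of a received reward from the current expectation, with the expectation updated incrementally. As an illustration only (a Rescorla-Wagner-style update, not the study's own analysis code), the computation can be sketched as:

```python
# Illustrative sketch, not the study's method: a Rescorla-Wagner update,
# where the prediction error is the deviation of the received reward
# from the current expectation, and the expectation is nudged toward
# the reward by a learning rate.

def rw_update(expectation, reward, learning_rate=0.1):
    """Return (prediction_error, updated_expectation)."""
    prediction_error = reward - expectation          # delta = r - V
    expectation += learning_rate * prediction_error  # V <- V + eta * delta
    return prediction_error, expectation

# With repeated identical rewards, the expectation converges on the
# reward and the prediction error decays toward zero.
v = 0.0
for _ in range(100):
    delta, v = rw_update(v, reward=1.0)
```

In this framing, a fully predicted reward produces no error signal, which is why deviations from previous expectations, rather than rewards themselves, are the quantity correlated with ventral striatal activity.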
Book
Music's ability to express and arouse emotions is a mystery that has fascinated both experts and laymen at least since ancient Greece. The predecessor to this book, Music and Emotion (OUP, 2001), was critically and commercially successful and stimulated much further work in this area. In the years since the publication of that book, empirical research in this area has blossomed, and the successor to Music and Emotion reflects the considerable activity in this area. The Handbook of Music and Emotion offers an up-to-date account of this vibrant domain. It provides comprehensive coverage of the many approaches that may be said to define the field of music and emotion, in all its breadth and depth. The first section offers multi-disciplinary perspectives on musical emotions from philosophy, musicology, psychology, neurobiology, anthropology, and sociology. The second section features methodologically-oriented chapters on the measurement of emotions via different channels (e.g., self report, psychophysiology, neuroimaging). Sections three and four address how emotion enters into different aspects of musical behavior, both the making of music and its consumption. Section five covers developmental, personality, and social factors. Section six describes the most important applications involving the relationship between music and emotion. In a final commentary, the editors comment on the history of the field, summarize the current state of affairs, and propose future directions for the field. The only book of its kind, the Handbook of Music and Emotion will fascinate music psychologists, musicologists, music educators, philosophers, and others with an interest in music and emotion (e.g. in marketing, health, engineering, film, and the game industry). It will be a valuable resource for established researchers in the field, a developmental aid for early-career researchers and postgraduate research students, and a compendium to assist students at various levels. In addition, as with its predecessor, it will also interest practicing musicians and lay readers fascinated by music and emotion.
Article
Music listening is a highly pleasurable and important part of most people's lives. Because music has no obvious importance for survival, the ubiquity of music remains puzzling and the brain processes underlying this attraction to music are not well understood. Like other rewards (such as food, sex, and money), pleasurable music activates structures in the dopaminergic reward system, but how music manages to tap into the brain's reward system is less clear. Here we propose a novel framework for understanding musical pleasure, suggesting that music conforms to the recent concept of pleasure cycles with phases of “wanting/expectation,” “liking,” and “learning.” We argue that expectation is fundamental to musical pleasure, and that music can be experienced as pleasurable both when it fulfills and violates expectations. Dopaminergic neurons in the midbrain represent expectations and violations of expectations (prediction errors) in response to “rewards” and “alert/incentive salience signals.” We argue that the human brain treats music as an alert/incentive salience signal, and suggest that the activity of dopamine neurons represents aspects of the phases of musical expectation and musical learning, but not directly the phase of music liking. Finally, we propose a computational model for understanding musical anticipation and pleasure operationalized through the recent theory of predictive coding. (PsycINFO Database Record (c) 2013 APA, all rights reserved)
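One common way to operationalize musical expectation in the predictive-coding spirit this abstract invokes is to learn statistical regularities over note sequences and score each incoming event by its surprisal. The toy sketch below (entirely hypothetical note data, not the paper's model) uses first-order transition probabilities over pitch labels:

```python
# Hypothetical sketch of expectation-as-statistics: learn first-order
# transition probabilities over pitches, then score each note by its
# surprisal -log2 P(note | previous note). High surprisal corresponds
# to an expectation violation; the note names below are toy data.
from collections import Counter, defaultdict
import math

def train_transitions(sequence):
    counts = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        counts[prev][nxt] += 1
    return {prev: {note: c / sum(ctr.values()) for note, c in ctr.items()}
            for prev, ctr in counts.items()}

def surprisal(model, prev, note, floor=1e-6):
    # Unseen transitions get a small floor probability instead of zero.
    p = model.get(prev, {}).get(note, floor)
    return -math.log2(p)

# In a repetitive phrase, the common continuation is unsurprising and
# the rare one surprising -- the violation the framework links to pleasure.
model = train_transitions(["C", "E", "C", "E", "C", "E", "C", "F"])
```

Here `surprisal(model, "C", "E")` is low (the expected continuation) while `surprisal(model, "C", "F")` is high, giving a simple quantitative handle on "fulfilled" versus "violated" expectation.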
Article
Music is a universal feature of human societies, partly owing to its power to evoke strong emotions and influence moods. During the past decade, the investigation of the neural correlates of music-evoked emotions has been invaluable for the understanding of human emotion. Functional neuroimaging studies on music and emotion show that music can modulate activity in brain structures that are known to be crucially involved in emotion, such as the amygdala, nucleus accumbens, hypothalamus, hippocampus, insula, cingulate cortex and orbitofrontal cortex. The potential of music to modulate activity in these structures has important implications for the use of music in the treatment of psychiatric and neurological disorders.
Article
We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing the decoding accuracy using a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they were continuously listening to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the stimulus. We expected timbral features to obtain a higher prediction accuracy than rhythmic and tonal ones. Moreover, we expected the areas significantly contributing to the decoding models to be consistent with areas of significant activation observed in previous research using a naturalistic paradigm with fMRI. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models agreed to a great extent with results obtained in previous studies. In particular, areas in superior temporal gyrus, Heschl's gyrus, Rolandic operculum, and cerebellum contributed to the decoding of timbral features. For the decoding of the rhythmic feature, we found the bilateral superior temporal gyrus, right Heschl's gyrus, and hippocampus to contribute most. The tonal feature, however, could not be significantly predicted, suggesting a higher inter-participant variability in its neural processing. A subsequent classification experiment revealed that segments of the stimulus could be classified from the fMRI data with significant accuracy. 
The present findings provide compelling evidence for the involvement of the auditory cortex, the cerebellum and the hippocampus in the processing of musical features during continuous listening to music.
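The analysis ingredients this abstract names, LASSO-regularized regression with leave-one-out cross-validation, can be sketched in miniature. The code below is an illustrative reimplementation on synthetic data (the actual study regressed computationally extracted musical feature time series on fMRI responses), using coordinate descent with soft-thresholding:

```python
import numpy as np

# Minimal sketch of the abstract's two analysis ingredients -- LASSO
# (coordinate descent with soft-thresholding) and leave-one-out
# cross-validation -- demonstrated on synthetic data, not fMRI.

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Minimize (1/2n)||y - X b||^2 + alpha ||b||_1 by coordinate descent."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with feature j removed, then soft-threshold.
            r_j = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r_j, n * alpha) / col_sq[j]
    return beta

def loo_mse(X, y, alpha):
    """Leave-one-out mean squared prediction error for a given alpha."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b = lasso_cd(X[mask], y[mask], alpha)
        errs.append(float((y[i] - X[i] @ b) ** 2))
    return sum(errs) / len(errs)

# Synthetic data: only features 0 and 2 truly predict the response,
# so the L1 penalty should zero out the irrelevant ones.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 0.0]) + 0.1 * rng.standard_normal(40)
beta = lasso_cd(X, y, alpha=0.1)
```

Sweeping `alpha` over a grid and keeping the value with the lowest `loo_mse` mirrors the paper's scheme of choosing model parameters by maximizing leave-one-out decoding accuracy; the sparsity of `beta` is what makes the surviving features interpretable.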
Article
Humans are extremely good at detecting anomalies in sensory input. For example, while listening to a piece of Western-style music, an anomalous key change or an out-of-key pitch is readily apparent, even to the non-musician. In this paper we investigate differences between musical experts and non-experts during musical anomaly detection. Specifically, we analyzed the electroencephalograms (EEG) of five expert cello players and five non-musicians while they listened to excerpts of J.S. Bach's Prelude from Cello Suite No. 1. All subjects were familiar with the piece, though experts also had extensive experience playing the piece. Subjects were told that anomalous musical events (AMEs) could occur at random within the excerpts of the piece and were told to report the number of AMEs after each excerpt. Furthermore, subjects were instructed to remain still while listening to the excerpts and their lack of movement was verified via visual and EEG monitoring. Experts had significantly better behavioral performance (i.e. correctly reporting AME counts) than non-experts, though both groups had mean accuracies greater than 80%. These group differences were also reflected in the EEG correlates of key-change detection post-stimulus, with experts showing more significant, greater magnitude, longer periods of, and earlier peaks in condition-discriminating EEG activity than novices. Using the timing of the maximum discriminating neural correlates, we performed source reconstruction and compared significant differences between cellists and non-musicians. We found significant differences that included a slightly right lateralized motor and frontal source distribution. The right lateralized motor activation is consistent with the cortical representation of the left hand - i.e. the hand a cellist would use, while playing, to generate the anomalous key-changes. 
In general, these results suggest that sensory anomalies detected by experts may in fact be partially a result of an embodied cognition, with a model of the action for generating the anomaly playing a role in its detection.
Article
We used functional magnetic resonance imaging to investigate neural processes when music gains reward value the first time it is heard. The degree of activity in the mesolimbic striatal regions, especially the nucleus accumbens, during music listening was the best predictor of the amount listeners were willing to spend on previously unheard music in an auction paradigm. Importantly, the auditory cortices, amygdala, and ventromedial prefrontal regions showed increased activity during listening conditions requiring valuation, but did not predict reward value, which was instead predicted by increasing functional connectivity of these regions with the nucleus accumbens as the reward value increased. Thus, aesthetic rewards arise from the interaction between mesolimbic reward circuitry and cortical networks involved in perceptual analysis and valuation.