
Neural Entrainment to the Rhythmic Structure of Music

The MIT Press
Journal of Cognitive Neuroscience
Neural Entrainment to the Rhythmic Structure of Music
Adam Tierney and Nina Kraus
Abstract
The neural resonance theory of musical meter explains musical beat tracking as the result of entrainment of neural oscillations to the beat frequency and its higher harmonics. This theory has gained empirical support from experiments using simple, abstract stimuli. However, to date there has been no empirical evidence for a role of neural entrainment in the perception of the beat of ecologically valid music. Here we presented participants with a single pop song with a superimposed bassoon sound. This stimulus was either lined up with the beat of the music or shifted away from the beat by 25% of the average interbeat interval. Both conditions elicited a neural response at the beat frequency. However, although the on-the-beat condition elicited a clear response at the first harmonic of the beat, this frequency was absent in the neural response to the off-the-beat condition. These results support a role for neural entrainment in tracking the metrical structure of real music and show that neural meter tracking can be disrupted by the presentation of contradictory rhythmic cues.
INTRODUCTION
Temporal patterns in music are organized metrically, with stronger and weaker beats alternating. This alternation takes place on multiple timescales, resulting in a complex sequence of stronger and weaker notes. Position within the metrical hierarchy affects how listeners perceive sounds; strong metrical positions are associated with higher goodness-of-fit judgments and enhanced duration discrimination (Palmer & Krumhansl, 1990). The musical beat is perceived where strong positions at multiple timescales coincide, although individual differences exist in the scale at which listeners perceive the beat (Iversen & Patel, 2008; Drake, Jones, & Baruch, 2000).
Metrical processing begins early in life: Brain responses to rhythmic sounds in newborn infants are modulated by each sound's position in the metrical hierarchy (Winkler, Haden, Ladinig, Sziller, & Honing, 2009). Metrical perception is, therefore, a fundamental musical skill, and as such there have been numerous attempts to model how listeners track metrical structure. An influential model proposes a bank of neural oscillators entraining to the beat (Velasco & Large, 2011; Large, 2000, 2008; Van Noorden & Moelants, 1999; Large & Kolen, 1994), resulting in saliency oscillating on multiple timescales (Barnes & Jones, 2000; Large & Jones, 1999). This model is supported by work showing that beta oscillations are modulated at the rate of presentation of rhythmic stimuli (Fujioka, Trainor, Large, & Ross, 2012), possibly reflecting auditory–motor coupling, as well as by work showing enhanced perceptual discrimination and detection when stimuli are aligned with a perceived beat (Bolger, Trost, & Schön, 2013; Miller, Carlson, & McAuley, 2013; Escoffier, Sheng, & Schirmer, 2010; McAuley & Jones, 2003; Jones, Moynihan, MacKenzie, & Puente, 2002; Barnes & Jones, 2000).
There is, however, no direct evidence for neural entrainment to metrical structure in real music. (We define neural entrainment in this paper as phase-locking of neural oscillations to the rhythmic structure of music.) Most investigations of the neural correlates of rhythm processing have used simple stimuli such as tone sequences and compared evoked responses to stimuli in strong and weak metrical positions. Studies of simple stimuli have found that strong metrical percepts are associated with larger evoked potentials and higher-amplitude evoked and induced beta and gamma oscillations (Schaefer, Vlek, & Desain, 2011; Vlek, Gielen, Farquhar, & Desain, 2011; Fujioka, Zendel, & Ross, 2010; Geiser, Sandmann, Jäncke, & Meyer, 2010; Abecasis, Brochard, del Río, Dufour, & Ortiz, 2009; Iversen, Repp, & Patel, 2009; Ladinig, Honing, Háden, & Winkler, 2009; Potter, Fenwick, Abecasis, & Brochard, 2009; Winkler et al., 2009; Pablos Martin et al., 2007; Abecasis, Brochard, Granot, & Drake, 2005; Snyder & Large, 2005; Brochard, Abecasis, Potter, Ragot, & Drake, 2003). Studies of simple stimuli have also demonstrated neural entrainment to a perceived beat and its harmonics (Nozaradan, Peretz, & Mouraux, 2012; Nozaradan, Peretz, Missal, & Mouraux, 2011). Furthermore, a recent study has shown that alignment with the beat of real, ecologically valid music modulates evoked responses to a stimulus (Tierney & Kraus, 2013a) such that on-the-beat stimuli elicit larger P1 responses; however, this result can be attributed either to enhancement of processing of the target stimulus or to neural tracking of the beat of the music. Thus, no study to date has demonstrated neural entrainment to the rhythmic structure of real music.
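The neural signature at issue here, elevated EEG amplitude at the beat frequency and its first harmonic, is typically measured with a frequency-tagging analysis: take the amplitude spectrum of the recording and read off the tagged bins. A minimal sketch on synthetic data (NumPy only; the sampling rate, duration, and threshold are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

fs = 250.0                    # sampling rate (Hz), hypothetical
beat_hz = 2.0                 # beat frequency (~120 BPM)
t = np.arange(0, 60, 1 / fs)  # 60 s of synthetic "EEG"

# Synthetic signal: oscillations at the beat and its first harmonic, plus noise
rng = np.random.default_rng(0)
eeg = (np.sin(2 * np.pi * beat_hz * t)
       + 0.5 * np.sin(2 * np.pi * 2 * beat_hz * t)
       + rng.normal(0, 1.0, t.size))

# Frequency tagging: amplitude spectrum, then read off the tagged bins.
# With a 60 s window the resolution is 1/60 Hz, so the beat (2 Hz) and
# its first harmonic (4 Hz) fall exactly on FFT bins.
amp = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return amp[np.argmin(np.abs(freqs - f))]

noise_floor = np.median(amp[(freqs > 0.5) & (freqs < 10)])
print(amp_at(beat_hz) > 5 * noise_floor)       # response at the beat?
print(amp_at(2 * beat_hz) > 5 * noise_floor)   # response at the harmonic?
```

In this toy case both checks succeed because both frequencies are present in the signal; in the off-the-beat condition reported above, the analogous check at the first harmonic would fail.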
Northwestern University
© 2014 Massachusetts Institute of Technology Journal of Cognitive Neuroscience 27:2, pp. 400–408
doi:10.1162/jocn_a_00704
... Because the extant literature on the ITL is solely based on the observation of behavioral outcomes (e.g., participants' conscious responses in two-alternative forced choice tasks or recall-based tasks), the neural basis of the loud-first principle of the ITL remains unclear. Recent neuroscientific research on the processing of rhythmic patterns in music (Tal et al., 2017; Doelling & Poeppel, 2015; Tierney & Kraus, 2015) and speech constituents in continuous speech (Zoefel & Kösem, 2024; Ding, Melloni, Zhang, Tian, & Poeppel, 2016; Di Liberto, O'Sullivan, & Lalor, 2015) may provide some clues to elucidate this question. In music, the perception of rhythm is facilitated by neural activity phase-locked to isochronous sequences of pulses (beats) and higher-level rhythmic groupings (meter) in the acoustic signal (Tal et al., 2017; Tierney & Kraus, 2015). ...
... Recent neuroscientific research on the processing of rhythmic patterns in music (Tal et al., 2017; Doelling & Poeppel, 2015; Tierney & Kraus, 2015) and speech constituents in continuous speech (Zoefel & Kösem, 2024; Ding, Melloni, Zhang, Tian, & Poeppel, 2016; Di Liberto, O'Sullivan, & Lalor, 2015) may provide some clues to elucidate this question. In music, the perception of rhythm is facilitated by neural activity phase-locked to isochronous sequences of pulses (beats) and higher-level rhythmic groupings (meter) in the acoustic signal (Tal et al., 2017; Tierney & Kraus, 2015). Similarly, when listening to running speech, neural populations phase-lock to the amplitude modulation of speech constituents in the speech signal (Ding et al., 2016; Ding & Simon, 2014; Nourski & Brugge, 2011). ...
... Altogether, our findings are consistent with previous neuroscientific literature on the sensory representation of music (Tal et al., 2017; Doelling & Poeppel, 2015; Tierney & Kraus, 2015) and speech (Ding, Patel, et al., 2017; Ding et al., 2016; Di Liberto et al., 2015) constituents. This literature suggests that the integration of speech units into higher-order constituents is supported by a temporal code that operates at multiple time scales. ...
Article
Full-text available
The perception of rhythmic patterns is crucial for the recognition of words in spoken languages, yet it remains unclear how these patterns are represented in the brain. Here, we tested the hypothesis that rhythmic patterns are encoded by neural activity phase-locked to the temporal modulation of these patterns in the speech signal. To test this hypothesis, we analyzed EEGs evoked with long sequences of alternating syllables acoustically manipulated to be perceived as a series of different rhythmic groupings in English. We found that the magnitude of the EEG at the syllable and grouping rates of each sequence was significantly higher than the noise baseline, indicating that the neural parsing of syllables and rhythmic groupings operates at different timescales. Distributional differences between the scalp topographies associated with each timescale suggests a further mechanistic dissociation between the neural segmentation of syllables and groupings. In addition, we observed that the neural tracking of louder syllables, which in trochaic languages like English are associated with the beginning of rhythmic groupings, was more robust than the neural tracking of softer syllables. The results of further bootstrapping and brain–behavior analyses indicate that the perception of rhythmic patterns is modulated by the magnitude of grouping alternations in the neural signal. These findings suggest that the temporal coding of rhythmic patterns in stress-based languages like English is supported by temporal regularities that are linguistically relevant in the speech signal.
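The comparison described here, EEG magnitude at the syllable and grouping rates against a noise baseline, is commonly implemented in frequency-tagging studies by subtracting the mean amplitude of neighboring FFT bins from the amplitude at the tagged bin. A hypothetical sketch (the bin offsets and flank width are illustrative assumptions, not the parameters used in this study):

```python
import numpy as np

def snr_spectrum(amp, target_bin, skip=1, flank=5):
    """Signal-to-noise at one FFT bin: amplitude at the target bin minus
    the mean amplitude of `flank` bins on each side, skipping `skip` bins
    immediately adjacent to the target (a common frequency-tagging
    noise baseline; the exact parameters here are illustrative)."""
    left = amp[target_bin - skip - flank:target_bin - skip]
    right = amp[target_bin + skip + 1:target_bin + skip + 1 + flank]
    noise = np.concatenate([left, right]).mean()
    return amp[target_bin] - noise

# Toy spectrum: a flat noise floor of 1.0 with a response of 3.0 at bin 40
amp = np.ones(100)
amp[40] = 3.0
print(snr_spectrum(amp, 40))   # amplitude above the local noise floor: 2.0
```

A value reliably above zero at the grouping rate across participants would then indicate neural parsing at that timescale.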
... Notably, we observed comparable evidence of neural categorization when limiting the analysis to responses averaged across 9 fronto-central channels (Fig. S4). This pool of channels was selected based on the fact that they have been shown to consistently capture EEG responses to repeating acoustic rhythms across previous studies 36,[54][55][56]. Analogously, these channels also showed the highest overall response magnitude averaged across all participants and conditions in the current study (Fig. S3). ...
... Next, we examined the location of the categorical boundary separating the two rhythm categories observed in the EEG and tap-force RSMs. Across participants, the location of this boundary in the best-fitting categorical model was remarkably similar for the EEG (median boundary at ratio 0.56). This result was further corroborated by considering all theoretical models of categorization differing in the position of the category boundary, rather than a single best-fitting model. Indeed, the distribution of correlation coefficients obtained across all categorical models separately for each participant showed marked similarity between the neural and behavioral (tap-force) responses. ...
Preprint
Full-text available
Humans across cultures show an outstanding capacity to perceive, learn, and produce musical rhythms. These skills rely on mapping the infinite space of possible rhythmic sensory inputs onto a finite set of internal rhythm categories. What are the brain processes underlying rhythm categorization? We used electroencephalography (EEG) to measure brain activity as human participants listened to a continuum of rhythmic sequences characterized by repeating patterns of two inter-onset intervals. Using frequency and representational similarity analyses, we show that brain activity does not merely track the temporal structure of rhythmic inputs, but, instead, automatically produces categorical representation of rhythms. Surprisingly, despite this automaticity, these rhythm categories do not arise in the earliest stages of the ascending auditory pathway, but show strong similarity between implicit neural and overt behavioral responses. Together, these results and methodological advances constitute a critical step towards understanding the biological roots and diversity of musical behaviors across cultures.
... Under DAT, decreasing entrainment in the auditory system leads to worse sensorimotor synchronization and groove is envisioned as embodied neural entrainment that codes temporal expectations in the premotor cortex (Zalta et al., 2024). In our study, decreasing ITC could straightforwardly reflect worsening entrainment in the auditory cortex in line with previous studies (Large et al., 2015; Mathias et al., 2020; Tierney & Kraus, 2015). Sustained pupillary activity, on the other hand, could reflect the mobilization of additional oscillators in the (pre)motor system required to maintain entrainment to the moderately complex rhythm or, alternatively, the additional cognitive resources needed to couple these two sets of oscillators. ...
... Specifically, we wanted to ensure that we had enough repetitions per condition to capture the typically small neurophysiological effects we expected while keeping the experiment itself under an hour long. Based on past EEG and pupillometry studies, we decided that three drumbeats that we had already validated in terms of groove and pleasure ratings would be sufficient (Bowling et al., 2019; Doelling & Poeppel, 2015; Fujioka et al., 2015; Mathias et al., 2020; Nozaradan et al., 2016; Stupacher, Witte, et al., 2016; Tierney & Kraus, 2015). In any case, it remains possible that other musical stimuli might not elicit the effects we observed and so future work with these methods should employ more varied and different musical stimuli. ...
Article
Full-text available
Attention is not constant but rather fluctuates over time and these attentional fluctuations may prioritize the processing of certain events over others. In music listening, the pleasurable urge to move to music (termed ‘groove’ by music psychologists) offers a particularly convenient case study of oscillatory attention because it engenders synchronous and oscillatory movements which also vary predictably with stimulus complexity. In this study, we simultaneously recorded pupillometry and scalp electroencephalography (EEG) from participants while they listened to drumbeats of varying complexity that they rated in terms of groove afterwards. Using the intertrial phase coherence of the beat frequency, we found that while subjects were listening, their pupil activity became entrained to the beat of the drumbeats and this entrained attention persisted in the EEG even as subjects imagined the drumbeats continuing through subsequent silent periods. This entrainment in both the pupillometry and EEG worsened with increasing rhythmic complexity, indicating poorer sensory precision as the beat became more obscured. Additionally, sustained pupil dilations revealed the expected, inverted U-shaped relationship between rhythmic complexity and groove ratings. Taken together, this work bridges oscillatory attention to rhythmic complexity in relation to musical groove.
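The intertrial phase coherence (ITC) measure used in this abstract can be sketched as the length of the mean unit phasor of single-trial phases at the beat frequency: 1 when every trial has the same phase, near 0 when phases are random. A toy illustration with synthetic trials (NumPy only; not the study's actual preprocessing or channel selection):

```python
import numpy as np

def itc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency: take each trial's
    phase from its FFT at the nearest bin, then the magnitude of the
    mean unit phasor across trials (1 = perfectly phase-locked)."""
    trials = np.asarray(trials)
    n = trials.shape[1]
    bin_ = int(round(freq * n / fs))
    spectra = np.fft.rfft(trials, axis=1)[:, bin_]
    phasors = spectra / np.abs(spectra)   # keep phase, discard amplitude
    return np.abs(phasors.mean())

fs, f = 100.0, 2.0                        # 2 Hz "beat", hypothetical
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(1)

# Phase-locked trials: same phase on every trial (plus noise) -> ITC near 1
locked = [np.sin(2 * np.pi * f * t) + rng.normal(0, 0.2, t.size)
          for _ in range(50)]
# Random-phase trials -> ITC near 0
jittered = [np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
            for _ in range(200)]

print(itc(locked, fs, f), itc(jittered, fs, f))
```

Worsening entrainment with rhythmic complexity, as reported above, would show up as a drop in this quantity at the beat frequency.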
... Among the variety of methods that have been proposed to capture periodic recurrence (see Lenc et al., 2021 for a review), frequency-tagging has been increasingly adopted in the neuroscience community over the last decade (Bouvet et al., 2020; Celma-Miralles et al., 2016; Celma-Miralles & Toro, 2019; Cirelli et al., 2016; Lenc et al., 2018, 2020, 2022; Li et al., 2019; Nozaradan, Mouraux, et al., 2016; Nozaradan, Peretz, & Keller, 2016; Nozaradan et al., 2011, 2012, 2018; Okawa et al., 2017; Sifuentes-Ortega et al., 2022; Tal et al., 2017; Tierney & Kraus, 2014). ...
Article
Experiencing music often entails the perception of a periodic beat. Despite being a widespread phenomenon across cultures, the nature and neural underpinnings of beat perception remain largely unknown. In the last decade, there has been a growing interest in developing methods to probe these processes, particularly to measure the extent to which beat‐related information is contained in behavioral and neural responses. Here, we propose a theoretical framework and practical implementation of an analytic approach to capture beat‐related periodicity in empirical signals using frequency‐tagging. We highlight its sensitivity in measuring the extent to which the periodicity of a perceived beat is represented in a range of continuous time‐varying signals with minimal assumptions. We also discuss a limitation of this approach with respect to its specificity when restricted to measuring beat‐related periodicity only from the magnitude spectrum of a signal and introduce a novel extension of the approach based on autocorrelation to overcome this issue. We test the new autocorrelation‐based method using simulated signals and by re‐analyzing previously published data and show how it can be used to process measurements of brain activity as captured with surface EEG in adults and infants in response to rhythmic inputs. Taken together, the theoretical framework and related methodological advances confirm and elaborate the frequency‐tagging approach as a promising window into the processes underlying beat perception and, more generally, temporally coordinated behaviors.
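The general idea behind the autocorrelation-based extension mentioned here can be illustrated numerically: beat-related periodicity shows up as positive peaks in a signal's normalized autocorrelation at the beat period (and its multiples), independent of the signal's phase. A toy sketch (all parameters illustrative, not the authors' implementation):

```python
import numpy as np

fs, beat_hz = 200.0, 2.0
t = np.arange(0, 30, 1 / fs)
rng = np.random.default_rng(2)

# A noisy signal with a periodic component at the beat rate
x = np.sin(2 * np.pi * beat_hz * t) + rng.normal(0, 1.0, t.size)

# Normalized autocorrelation (lag 0 = 1); periodicity at the beat appears
# as a positive peak at the beat period regardless of phase
x = x - x.mean()
ac = np.correlate(x, x, mode="full")[x.size - 1:]
ac = ac / ac[0]

beat_lag = int(fs / beat_hz)        # lag of one beat period (0.5 s)
half_lag = int(fs / beat_hz / 2)    # half a period: anticorrelated
print(ac[beat_lag], ac[half_lag])   # positive peak vs. negative trough
```

Unlike reading only the magnitude spectrum, a profile of autocorrelation peaks at beat-period lags retains the temporal arrangement of the periodicity, which is the specificity problem the authors raise.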
... While it is still debated whether such brain rhythms emerged from the natural properties of speech and music or whether rhythm in speech and music evolved around this functional cortical architecture 32 , a functional relevance has been proposed. Endogenous brain rhythms may support predictive processing and event segmentation by entraining to the rhythmic temporal modulations in the speech 20,[33][34][35] and the music signal 18,[36][37][38] . Speech research emphasized the role of auditory cortex brain rhythms in the theta range (~4.5 Hz) that are proposed to constrain temporal processing 20,22,39,40 . ...
Article
Full-text available
Speech and music might involve specific cognitive rhythmic timing mechanisms related to differences in the dominant rhythmic structure. We investigate the influence of different motor effectors on rate-specific processing in both domains. A perception and a synchronization task involving syllable and piano tone sequences and motor effectors typically associated with speech (whispering) and music (finger-tapping) were tested at slow (~2 Hz) and fast rates (~4.5 Hz). Although synchronization performance was generally better at slow rates, the motor effectors exhibited specific rate preferences. Finger-tapping was advantaged compared to whispering at slow but not at faster rates, with synchronization being effector-dependent at slow, but highly correlated at faster rates. Perception of speech and music was better at different rates and predicted by a fast general and a slow finger-tapping synchronization component. Our data suggests partially independent rhythmic timing mechanisms for speech and music, possibly related to a differential recruitment of cortical motor circuitry.
... Although music used in therapeutic applications may have additional musical features present (i.e., melody, harmony, lyrics, sub-beats, etc.), researchers have shown the ability of individuals to spontaneously organize musical stimuli according to the meter and move to the stimulus (Burger et al., 2018). Furthermore, neural entrainment to the metric organization of musical stimuli has been shown in adults (Tierney and Kraus, 2015) and in children as young as 8 months (Cantiani et al., 2022). Researchers have suggested that the neural response to meter-related frequencies occurs both at the subcortical and cortical level, with functional connections between the auditory cortex and motor structures (Nozaradan et al., 2018). ...
Article
Full-text available
Emerging research suggests that music and rhythm-based interventions offer promising avenues for facilitating functional outcomes for autistic individuals. Evidence suggests that many individuals with ASD have music processing and production abilities similar to those of neurotypical peers. These individual strengths in music processing and production may be used within music therapy with a competence-based treatment approach. We provide an updated perspective of how music and rhythm-based interventions promote sensory and motor regulation, and how rhythm and music may then impact motor, social, and communicative skills. We discuss how music can engage and motivate individuals, and can be used intentionally to promote skill acquisition through both structured and flexible therapeutic applications. Overall, we illustrate the potential of music and rhythm as valuable tools in addressing skill development in individuals on the autism spectrum.
... Given that previous research has observed stronger neural synchronization at the musical pulse frequency and its harmonic frequencies (Tierney and Kraus, 2015; Tichko et al., 2022), and considering the differences in preferred music in our study, we focused on analyzing the neural entrainment to both the pulse frequency and its corresponding harmonic frequency of individualized music. Initially, we identified the pulse frequency of individualized music by extracting the peak frequency from the power spectrum of the music envelope. ...
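The pulse-extraction step described in this excerpt, taking the peak of the music envelope's power spectrum, can be sketched on a toy "music" signal. The rectify-and-smooth envelope used below is a crude stand-in; the study's actual envelope extraction may differ:

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
pulse_hz = 2.5   # hypothetical musical pulse (150 BPM)

# Toy "music": a 100 Hz carrier whose amplitude envelope pulses at the beat
envelope = 1.0 + 0.9 * np.sin(2 * np.pi * pulse_hz * t)
audio = envelope * np.sin(2 * np.pi * 100.0 * t)

# Crude envelope extraction: rectify, then smooth with a 50 ms moving average
rectified = np.abs(audio)
win = int(fs / 20)
smooth = np.convolve(rectified, np.ones(win) / win, mode="same")

# Pulse frequency = peak of the envelope's power spectrum (excluding DC)
spec = np.abs(np.fft.rfft(smooth - smooth.mean())) ** 2
freqs = np.fft.rfftfreq(smooth.size, 1 / fs)
est = freqs[1:][np.argmax(spec[1:])]
print(est)   # recovers the pulse rate of the toy signal
```

The harmonic frequencies analyzed in the excerpt would simply be integer multiples of the estimated pulse frequency.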
Article
Full-text available
Objective This study aimed to determine whether patients with disorders of consciousness (DoC) could experience neural entrainment to individualized music, which explored the cross-modal influences of music on patients with DoC through phase-amplitude coupling (PAC). Furthermore, the study assessed the efficacy of individualized music or preferred music (PM) versus relaxing music (RM) in impacting patient outcomes, and examined the role of cross-modal influences in determining these outcomes. Methods Thirty-two patients with DoC [17 with vegetative state/unresponsive wakefulness syndrome (VS/UWS) and 15 with minimally conscious state (MCS)], alongside 16 healthy controls (HCs), were recruited for this study. Neural activities in the frontal–parietal network were recorded using scalp electroencephalography (EEG) during baseline (BL), RM, and PM. Cerebral-acoustic coherence (CACoh) was explored to investigate participants' ability to track music; meanwhile, phase-amplitude coupling (PAC) was utilized to evaluate the cross-modal influences of music. Three months post-intervention, the outcomes of patients with DoC were followed up using the Coma Recovery Scale-Revised (CRS-R). Results HCs and patients with MCS showed higher CACoh compared to VS/UWS patients within the musical pulse frequency (p = 0.016, p = 0.045; p < 0.001, p = 0.048, for RM and PM, respectively, following Bonferroni correction). Only theta-gamma PAC demonstrated a significant interaction effect between groups and music conditions (F(2,44) = 2.685, p = 0.036). For HCs, the theta-gamma PAC in the frontal–parietal network was stronger in the PM condition compared to the RM (p = 0.016) and BL conditions (p < 0.001). For patients with MCS, the theta-gamma PAC was stronger in the PM than in the BL (p = 0.040), while no difference was observed among the three music conditions in patients with VS/UWS.
Additionally, we found that MCS patients who showed improved outcomes after 3 months exhibited evident neural responses to preferred music (p = 0.019). Furthermore, the ratio of theta-gamma coupling changes in PM relative to BL could predict clinical outcomes in MCS patients (r = 0.992, p < 0.001). Conclusion Individualized music may serve as a potential therapeutic method for patients with DoC through cross-modal influences, which rely on enhanced theta-gamma PAC within the consciousness-related network.
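Theta-gamma phase-amplitude coupling of the kind analyzed here is often quantified with a mean-vector-length estimator (Canolty-style): the amplitude-weighted average of the slow oscillation's phase phasor. A synthetic sketch of that estimator (one common PAC measure, not necessarily the exact one used in this study):

```python
import numpy as np

def pac_mvl(phase, amp):
    """Mean-vector-length phase-amplitude coupling: magnitude of the
    amplitude-weighted mean phasor of the slow oscillation's phase.
    Larger values = gamma amplitude concentrated at one theta phase."""
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500.0
t = np.arange(0, 20, 1 / fs)
theta_phase = (2 * np.pi * 6.0 * t) % (2 * np.pi)   # 6 Hz phase ramp

rng = np.random.default_rng(3)
# Coupled: gamma amplitude peaks at a fixed theta phase
amp_coupled = 1.0 + 0.8 * np.cos(theta_phase) + rng.normal(0, 0.1, t.size)
# Uncoupled: gamma amplitude unrelated to theta phase
amp_flat = 1.0 + rng.normal(0, 0.1, t.size)

print(pac_mvl(theta_phase, amp_coupled))   # clearly above zero
print(pac_mvl(theta_phase, amp_flat))      # near zero
```

In practice the phase and amplitude series would come from band-pass filtered EEG (theta phase, gamma amplitude), and the raw value is usually compared against surrogate data before interpretation.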
... The first holds that the rhythmic skills named above are closely interrelated and form a unified whole, a single overarching competence. In that case, in theory, a single rhythmic task would suffice to measure this entire construct (Tierney & Kraus, 2015a, 2015b). This unified view is supported by several studies that report correlations between different rhythmic skill tests, suggesting that the measured skills form a common, combined set (Fitch & Rosenfeld, 2007; Fujii & Schlaug, 2013; Pecenka & Keller, 2011). ...
Thesis
Full-text available
Musical activities, in addition to their many positive transfer effects, are also a source of joy. The development of musical skills at school takes place primarily in music lessons, which are the main stage for the foundation of general musical literacy. Therefore, school music lessons offer all children the opportunity to experience the feeling of freedom and joy by connecting rhythm and movement, acquiring musical knowledge, and refining their aesthetic sense. At the same time, based on our experience, the practice and methodology of Hungarian music education in elementary schools put more emphasis on singing activities and less on rhythmic activities. School music lessons can also provide opportunities for students to like music and musical activities more; however, based on previous studies, music lessons are not among the most popular lessons in Hungary. In our research, we aimed to examine the development of rhythmic skills among first- and second-grade students. In a longitudinal study, we examined the characteristics of the rhythmic skill development of children studying according to the traditional curriculum in the first two grades of elementary school (n=205). Our two other experimental studies concern the playful development possibilities of rhythmic skills in classroom conditions, in which we examined the possibility of effective development of rhythmic skills and the effects of regular practice of rhythmic games on the attitude towards music lessons (n1=258, n2=218). Based on our longitudinal study results, the auditory rhythmic ability which represents rhythmic skills shows significant, moderate development with slowing intensity in the first two years of elementary school. We have also shown that rhythm perception and rhythm reproduction develop to different degrees in the examined grades. The developmental characteristics of the auditory rhythmic ability and the changes in its structure were revealed by factor analysis. In the first two years of elementary school, our results show a continuous reorganization of rhythmic skills, and in the second year we can conclude that the ability to perceive tempo stabilizes. The results of our first experiment show that the application of the first rhythmic development program did not affect the development of abilities in the experimental group (n=141) to a significantly greater extent than in the control group (n=117), but at the same time it slightly helped the students with low development to catch up. The development program was successful in promoting a positive attitude toward music lessons, contributing to a significantly more positive attitude in the experimental group than in the control group. As a result of our second experiment, the application of an intensive rhythmic development program, the experimental group (n=90) achieved significantly higher development in all tested musical abilities than the control group (n=128). The students of the experimental group using the intensive rhythmic development program liked the music lessons significantly more than the students of the control group. Our longitudinal study and two experiments conducted in first grade support previous research results, according to which the age of 6–8 is a sensitive period for the development of rhythmic skills. In addition, we have also shown that rhythmic skills can be significantly improved in the first grade with intensive and regular development integrated into the lesson.
Preprint
Pitch and time are the essential dimensions defining musical melody. Recent electrophysiological studies have explored the neural encoding of musical pitch and time by leveraging probabilistic models of their sequences, but few have studied how the features might interact. This study examines these interactions by introducing “chimeric music,” which pairs two distinct melodies and exchanges their pitch contours and note onset times to create two new melodies, thereby distorting musical pattern while maintaining the marginal statistics of the original pieces’ pitch and temporal sequences. Through this manipulation, we aimed to dissect music processing and the interaction between pitch and time. Employing the temporal response function (TRF) framework, we analyzed the neural encoding of melodic expectation and musical downbeats in participants with varying levels of musical training. Our findings revealed differences in the encoding of melodic expectation between original and chimeric stimuli in both dimensions, with a significant impact of musical experience. This suggests that the structural violation caused by decoupling the pitch and temporal structure affects expectation processing. In our analysis of downbeat encoding, we found an enhanced neural response when participants heard a note that aligned with the downbeat during music listening. In chimeric music, responses to downbeats were larger when the note was also a downbeat in the original music that provided the pitch sequence, indicating an effect of pitch structure on beat perception. This study advances our understanding of the neural underpinnings of music, emphasizing the significance of pitch-time interaction in the neural encoding of music. Significance Statement: Listening to music is a complex and multidimensional auditory experience. Recent studies have investigated the neural encoding of pitch and timing sequences in musical structure, but these dimensions have been studied independently.
This study addresses the gap in understanding how the interaction between pitch and time affects their encoding. By introducing “chimeric music,” which decouples these dimensions in melodies, we investigate how this interaction influences neural activity using EEG. Leveraging the temporal response function (TRF) framework, we found that structural violations in pitch-time interactions impact musical expectation processing and beat perception. These results advance our knowledge of how the brain processes complex auditory stimuli like music, underscoring the critical role of pitch-time interactions in music perception.
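At its core, the TRF framework used in the study above is regularized (ridge) linear regression between time-lagged stimulus features and the recorded EEG. The sketch below illustrates that idea for a single stimulus feature and a single EEG channel; the function name, parameters, and lag convention are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_trf(stimulus, eeg, max_lag, lam=1.0):
    """Estimate a temporal response function by ridge regression.

    stimulus: 1-D array of a stimulus feature (e.g., melodic surprisal).
    eeg:      1-D array of EEG samples, same length as stimulus.
    max_lag:  number of sample lags in the TRF window.
    lam:      ridge regularization strength.
    Returns one weight per lag.
    """
    n = len(stimulus)
    # Lagged design matrix: column k holds the stimulus delayed by k samples.
    X = np.zeros((n, max_lag))
    for k in range(max_lag):
        X[k:, k] = stimulus[:n - k]
    # Ridge solution: w = (X'X + lam*I)^(-1) X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(max_lag), X.T @ eeg)
```

With noiseless synthetic data and weak regularization, the estimated weights recover the kernel that generated the "EEG" from the stimulus; in practice `lam` is tuned by cross-validation.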
Article
Sensorimotor synchronization (SMS) is the temporal coordination of motor movements with external or imagined stimuli. Finger-tapping studies indicate better SMS performance with auditory or tactile stimuli than with visual stimuli. However, SMS with a visual rhythm can be improved by enriching stimulus properties (e.g., spatiotemporal content) or by individual differences (e.g., one's vividness of auditory imagery). We previously showed that higher self-reported vividness of auditory imagery led to more consistent synchronization-continuation performance when participants continued without a guiding visual rhythm. Here, we examined the contribution of imagery to the SMS performance of proficient imagers, including an auditory or visual distractor task during the continuation phase. While the visual distractor task had minimal effect, SMS consistency was significantly worse when the auditory distractor task was present. Our electroencephalography analysis revealed beat-related neural entrainment only when the visual or auditory distractor tasks were present. During continuation with the auditory distractor task, the neural entrainment showed an occipital electrode distribution, suggesting the involvement of visual imagery. Unique to SMS continuation with the auditory distractor task, we found neural and sub-vocal (measured with electromyography) entrainment at the three-beat pattern frequency. In this most difficult condition, proficient imagers employed both beat- and pattern-related imagery strategies. However, this combination was insufficient to restore SMS consistency to that observed with a visual or no distractor task. Our results suggest that proficient imagers effectively utilized beat-related imagery in one modality when imagery in another modality was limited.
Article
Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates one meter from another and coincides with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music.
Article
A number of phenomena related to the perception of isochronous tone sequences peak at a certain rate (or tempo) and taper off at both slower and faster rates. In the present paper we start from the hypothesis that the peaking finds its origin in the presence of a damped resonating oscillator in the perceptual-motor system. We assume that for pulse perception only the 'effective' resonance curve matters, i.e., the enhancement of the amplitude of the oscillator beyond the critical damping. On the basis of the effective resonance curve, analyses have been made of data of Vos (1973) on subjective rhythmization and of data on tapping along isochronous tone sequences (Parncutt, 1994) and polyrhythmic sequences (Handel & Oshinsky, 1981). The results show that these data can be very well approximated with the proposed model. The best results are obtained with a resonance period of 500-550 ms and a width at half height of about 400-800 ms. A comparison is made with a number of other tempo related phenomena. In the second part a preliminary effort is made to determine the distribution of perceived tempi of musical pieces heard on the radio and in recordings of several styles, by having a number of listeners tapping along these pieces. The resonance curve appears to be a good tool to characterize these distributions.
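The resonance model in the abstract above can be illustrated with the amplitude response of a driven damped harmonic oscillator. The formula and parameter values below are a minimal sketch of that general idea, not the authors' exact model or fitted parameters.

```python
import numpy as np

def resonance_amplitude(freqs, f0, gamma):
    """Amplitude response of a driven damped harmonic oscillator.

    freqs: driving frequencies in Hz; f0: natural frequency in Hz;
    gamma: damping coefficient (larger = broader, flatter curve).
    """
    return 1.0 / np.sqrt((f0**2 - freqs**2) ** 2 + (gamma * freqs) ** 2)

# A natural period of ~525 ms corresponds to f0 of roughly 1.9 Hz.
f0 = 1.0 / 0.525
freqs = np.linspace(0.5, 5.0, 1000)
curve = resonance_amplitude(freqs, f0, gamma=0.5)
# Damping shifts the peak slightly away from the natural period.
peak_period_ms = 1000.0 / freqs[np.argmax(curve)]
```

Tapping rates near the peak of such a curve are maximally enhanced, while both slower and faster rates taper off, mirroring the tempo preferences described above.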
Article
The auditory steady state potentials may be an important technique in objective audiometry. The effects of stimulus rate, intensity, and tonal frequency on these potentials were investigated using both signal averaging and on-line Fourier analysis. Stimulus presentation rates of 40 to 45/sec result in a 40 Hz sinusoidal response which is about twice the amplitude of the 10 and 60/sec responses. No significant effects of subject age or sex were seen. The 40/sec response shows a linear decrease in amplitude and a linear increase in latency when stimulus intensity is decreased from 90 to 20 dB normal hearing level. This response is recordable to within a few decibels of behavioral threshold. Stimuli of different tonal frequency give similar amplitude/rate functions, with absolute amplitude decreasing with increasing tonal frequency. Signal averaging and Fourier analysis provide nearly identical amplitude/rate, amplitude/intensity, and latency/intensity functions. Both methods of analysis may be used, therefore, to record the 40 Hz steady state potential. Fourier analysis, however, may be the faster and less expensive method. Furthermore, techniques ("zoom") are available with Fourier analysis to study the effects of varying stimulus parameters on-line with the Fourier analysis procedure.
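The on-line Fourier analysis described above amounts to reading off the amplitude of the spectral component at the stimulation rate. A minimal sketch of that computation on synthetic data (the function name and epoch parameters are illustrative):

```python
import numpy as np

def component_amplitude(signal, fs, target_hz):
    """Amplitude of the spectral component nearest target_hz.

    signal: 1-D recording; fs: sampling rate in Hz.
    Scaled so that a pure sinusoid of amplitude A returns A.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = np.argmin(np.abs(freqs - target_hz))
    return 2.0 * np.abs(spectrum[k]) / len(signal)

# A 1-second epoch sampled at 1 kHz containing a 40 Hz steady-state response:
fs = 1000
t = np.arange(fs) / fs
epoch = 3.0 * np.sin(2 * np.pi * 40 * t)
amp40 = component_amplitude(epoch, fs, 40.0)  # ≈ 3.0
```

Tracking this single spectral amplitude as stimulus rate or intensity varies reproduces the amplitude/rate and amplitude/intensity functions the authors obtained with Fourier analysis.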
Article
The cognitive strategies by which humans process complex, metrically ambiguous rhythmic patterns remain poorly understood. We investigated listeners' abilities to perceive, process, and produce complex, syncopated rhythmic patterns played against a regular sequence of pulses. Rhythmic complexity was varied along a continuum and quantified using an objective metric of syncopation suggested by Longuet-Higgins and Lee. We used a recognition memory task to assess the immediate and longer-term perceptual salience and memorability of rhythmic patterns. The tasks required subjects to (a) tap in time to the rhythms, (b) reproduce these same rhythm patterns given a steady pulse, and (c) recognize these patterns when replayed both immediately after the other tasks and after a 24-hour delay. Subjects tended to reset the phase of their internally generated pulse with highly complex, syncopated rhythms, often pursuing a strategy of reinterpreting or "re-hearing" the rhythm as less syncopated. Thus, greater complexity in rhythmic stimuli leads to a reorganization of the cognitive representation of the temporal structure of events. Less complex rhythms were also more robustly encoded into long-term memory than more complex, syncopated rhythms in the delayed memory task.
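The Longuet-Higgins and Lee metric mentioned above scores a rhythm by penalizing note onsets on weak metrical positions that are followed by rests (or ties) on stronger positions. The sketch below captures that core idea in simplified form; the weight values and the exact pairing rule are illustrative, as published implementations of the measure differ in detail.

```python
def lhl_syncopation(pattern, weights):
    """Simplified Longuet-Higgins & Lee syncopation score.

    pattern: 1 = note onset, 0 = rest or tied note, on a metrical grid.
    weights: metrical weight of each grid position (higher = stronger).
    Each rest on a position stronger than the preceding onset adds the
    weight difference to the score; higher scores = more syncopated.
    """
    if 1 not in pattern:
        return 0
    n = len(pattern)
    score = 0
    for i in range(n):
        if pattern[i] == 0:
            # Walk back (wrapping around the bar) to the preceding onset.
            j = (i - 1) % n
            while pattern[j] == 0:
                j = (j - 1) % n
            if weights[i] > weights[j]:
                score += weights[i] - weights[j]
    return score

# Illustrative weights for one 4/4 bar at the eighth-note level:
# downbeat strongest, then beat 3, then beats 2 and 4, then offbeats.
WEIGHTS_8 = [0, -3, -2, -3, -1, -3, -2, -3]
```

An on-beat pattern such as `[1, 0, 1, 0, 1, 0, 1, 0]` scores 0, while shifting onsets off the beat (e.g., `[1, 0, 0, 1, 0, 0, 1, 0]`) raises the score, placing rhythms along the complexity continuum the study describes.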
Article
Even within equitonal isochronous sequences, listeners report perceiving differences among the tones, reflecting some grouping and accenting of the sound events. In a previous study, we explored this phenomenon of “subjective rhythmization” physiologically through brain event-related potentials (ERPs). We found differences in the ERP responses to small intensity deviations introduced in different positions of isochronous sequences, even though all sound events were physically identical. These differences seemed to follow a binary pattern, with larger amplitudes in the response elicited by deviants in odd-numbered than in even-numbered positions. The experiments reported here were designed to test whether the differences observed corresponded to a metrical pattern, by using a similar design in sequences of a binary (long-short) or a ternary (long-short-short) meter. We found a similar pattern of results in the binary condition, but a significantly different pattern in the ternary one. Importantly, the amplitude of the ERP response was largest in positions corresponding to strong beats in all conditions. These results support the notion of a binary default metrical pattern spontaneously imposed by listeners, and a better processing of the first (accented) event in each perceptual group. The differences were mainly observed in a late, attention-dependent component of the ERPs, corresponding to rather high-level processing.
Article
The beneficial effects of musical training are not limited to enhancement of musical skills, but extend to language skills. Here, we review evidence that musical training can enhance reading ability. First, we discuss five subskills underlying reading acquisition (phonological awareness, speech-in-noise perception, rhythm perception, auditory working memory, and the ability to learn sound patterns) and show that each is linked to music experience. We link these five subskills through a unifying biological framework, positing that they share a reliance on auditory neural synchrony. After laying this theoretical groundwork for why musical training might be expected to enhance reading skills, we review the results of longitudinal studies providing evidence for a role of musical training in enhancing language abilities. Taken as a whole, these findings suggest that musical training can be an effective developmental educational strategy for all children, including those with language learning impairments.