Article

Neural Entrainment to the Rhythmic Structure of Music

Authors: Adam Tierney and Nina Kraus

Abstract

The neural resonance theory of musical meter explains musical beat tracking as the result of entrainment of neural oscillations to the beat frequency and its higher harmonics. This theory has gained empirical support from experiments using simple, abstract stimuli. However, to date there has been no empirical evidence for a role of neural entrainment in the perception of the beat of ecologically valid music. Here we presented participants with a single pop song with a superimposed bassoon sound. The superimposed sound was either aligned with the beat of the music or shifted away from the beat by 25% of the average interbeat interval. Both conditions elicited a neural response at the beat frequency. However, although the on-the-beat condition elicited a clear response at the first harmonic of the beat, this frequency was absent in the neural response to the off-the-beat condition. These results support a role for neural entrainment in tracking the metrical structure of real music and show that neural meter tracking can be disrupted by the presentation of contradictory rhythmic cues.
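The analysis the abstract describes is an instance of the frequency-tagging approach: transform beat-locked EEG into the frequency domain and read off the amplitude at the beat frequency and its first harmonic. Below is a minimal sketch of that logic, assuming a hypothetical 2 Hz beat and simulated single-channel data; the paper's actual pipeline and parameters are not reproduced here.

```python
# Minimal frequency-tagging sketch (illustrative, not the authors' code).
# Assumptions: one EEG channel at fs = 250 Hz, hypothetical 2 Hz beat.
import numpy as np

fs = 250.0                        # assumed sampling rate (Hz)
f_beat = 2.0                      # hypothetical beat frequency (Hz)
t = np.arange(0, 60.0, 1.0 / fs)  # one 60-s epoch

rng = np.random.default_rng(0)
# Toy "EEG": beat-rate and first-harmonic components buried in noise.
eeg = (0.8 * np.sin(2 * np.pi * f_beat * t)
       + 0.4 * np.sin(2 * np.pi * 2 * f_beat * t)
       + rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
amps = 2 * np.abs(np.fft.rfft(eeg)) / t.size  # amplitude spectrum

def amp_at(f):
    """Amplitude at the FFT bin closest to frequency f."""
    return amps[np.argmin(np.abs(freqs - f))]

print(f"beat ({f_beat} Hz): {amp_at(f_beat):.3f}")
print(f"first harmonic ({2 * f_beat} Hz): {amp_at(2 * f_beat):.3f}")
```

On this toy signal, both target bins stand well above the noise floor; the paper's off-the-beat result corresponds to the first-harmonic bin dropping to floor level while the beat-frequency bin survives.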


... Moreover, the necessity to include events (or omissions thereof) that violate expectations may actually interfere with the beat percept itself. The current study evaluated another possible neural measure that may be used to probe beat perception: steady-state evoked potentials at beat-relevant frequencies (Gilmore & Russo, 2021; Lenc et al., 2018; Nave et al., 2022; Nozaradan et al., 2011, 2012b; Tierney & Kraus, 2014). ...
... However, we still did not observe attention effects on 1.1-Hz power, though we note that this analysis was underpowered from the start. It is notable that several previous studies found something similar to what we observed here: condition effects that would theoretically be related to beat perception were stronger at twice the beat rate, i.e., its first harmonic (Harding et al., 2019; Kaneshiro et al., 2020; Lenc et al., 2018; Tierney & Kraus, 2014). Although there may indeed be a fundamental reason why attention did not modulate spectral energy at 1.1 Hz (the rate at which most participants tended to tap) in the current study, or why previous studies also observed the strongest effects at the first harmonic of the beat, the reason may also be more trivial. ...
... One goal of the current study was to provide a proof-of-principle that steady-state evoked potentials related to beat perception would be observed with non-repeating rhythms. To date, this technique has mostly been applied either to highly repetitive stimuli, where the same few bars repeat over and over again (Cirelli et al., 2016; Nozaradan et al., 2012b, 2016), or to music (Doelling & Poeppel, 2015; Tierney & Kraus, 2014). We consider the current study an important step: robust beat perception occurs in response to non-repeating rhythms, yet little is known about the relationship between steady-state evoked potentials and beat perception for rhythms that do not repeat. ...
Article
Full-text available
A growing body of evidence suggests that steady‐state evoked potentials may be a useful measure of beat perception, particularly when obtaining traditional, explicit measures of beat perception is difficult, such as with infants or non‐human animals. Although attending to a stimulus is not necessary for most traditional applications of steady‐state evoked potentials, it is unknown how attention affects steady‐state evoked potentials that arise in response to beat perception. Additionally, most applications of steady‐state evoked potentials to measure beat perception have used repeating rhythms or real music. Therefore, it is unclear how the steady‐state response relates to the robust beat perception that occurs with non‐repeating rhythms. Here, we used electroencephalography to record participants’ brain activity as they listened to non‐repeating musical rhythms while either attending to the rhythms or while distracted by a concurrent visual task. Non‐repeating auditory rhythms elicited steady‐state evoked potentials at perceived beat frequencies (perception was validated in a separate sensorimotor synchronization task) that were larger when participants attended to the rhythms compared to when they were distracted by the visual task. Therefore, although steady‐state evoked potentials appear to index beat perception to non‐repeating musical rhythms, this technique may be limited to when participants are known to be attending to the stimulus.
... Neural activity synchronizes to different types of rhythmic sounds, such as speech and music (Doelling and Poeppel, 2015; Nicolaou et al., 2017; Ding et al., 2017; Kösem et al., 2018), over a wide range of rates. In music, neural activity synchronizes with the beat, the most prominent isochronous pulse in music to which listeners sway their bodies or tap their feet (Tierney and Kraus, 2015; Nozaradan et al., 2012; Large and Snyder, 2009; Doelling and Poeppel, 2015). Listeners show a strong behavioral preference for music with beat rates around 2 Hz (here, we use the term tempo to refer to the beat rate). ...
... This mimics approaches used for studying neural synchronization to speech, where neural activity has been shown to synchronize with the amplitude envelope (Peelle and Davis, 2012), which roughly corresponds to syllabic fluctuations (Doelling et al., 2014), as well as to 'higher order' semantic information (Broderick et al., 2019). Notably, most studies that have examined neural synchronization to musical rhythm have used simplified musical stimuli, such as MIDI melodies (Kumagai et al., 2018) and monophonic melodies (Di Liberto et al., 2020), or rhythmic lines comprising clicks or sine tones (Nozaradan et al., 2012; Nozaradan et al., 2011; Wollman et al., 2020); only a few studies have focused on naturalistic, polyphonic music (Tierney and Kraus, 2015; Madsen et al., 2019; Kaneshiro et al., 2020; Doelling and Poeppel, 2015). 'Higher order' musical features are difficult to compute for naturalistic music, which is typically polyphonic and has complex spectro-temporal properties (Zatorre et al., 2002). ...
... In fact, the highest coherence was observed at the first harmonic and not at the stimulation tempo itself (Figure 2I). This replicates previous work that also showed higher coherence (Kaneshiro et al., 2020) and spectral amplitude (Tierney and Kraus, 2015) at the first harmonic than at the musical beat rate. There are several potential reasons for this finding. ...
Article
Full-text available
Neural activity in the auditory system synchronizes to sound rhythms, and brain–environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound’s amplitude envelope. We hypothesized that – especially for music – the envelope might not best capture the complex spectro-temporal fluctuations that give rise to beat perception and synchronized neural activity. This study investigated (1) neural synchronization to different musical features, (2) tempo-dependence of neural synchronization, and (3) dependence of synchronization on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo-modulated music (1–4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of music – as opposed to the amplitude envelope – evoked strongest neural synchronization. Moreover, music with slower beat rates, high familiarity, and easy-to-perceive beats elicited the strongest neural response. Our results demonstrate the importance of spectro-temporal fluctuations in music for driving neural synchronization, and highlight its sensitivity to musical tempo, familiarity, and beat salience.
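Since this abstract contrasts the amplitude envelope with spectral flux as candidate stimulus features, here is a hedged NumPy sketch of how the two are commonly computed from a magnitude STFT; the window and hop sizes below are assumptions, not the study's parameters.

```python
# Envelope vs. spectral flux from a magnitude STFT (illustrative sketch;
# window/hop are assumed values, not taken from the study).
import numpy as np

def stft_mag(x, win_len=1024, hop=512):
    """Magnitude STFT with a Hann window."""
    window = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 22050
t = np.arange(0, 5.0, 1.0 / fs)
# Toy audio: a 440 Hz tone amplitude-modulated at a 2 Hz "beat" rate.
audio = np.sin(2 * np.pi * 440 * t) * (1.0 + 0.5 * np.sin(2 * np.pi * 2.0 * t))

mag = stft_mag(audio)                      # frames x frequency bins

# Amplitude envelope: summed magnitude per frame.
envelope = mag.sum(axis=1)

# Spectral flux: half-wave-rectified frame-to-frame magnitude increase,
# summed over frequency (a standard MIR definition).
flux = np.concatenate([[0.0], np.maximum(np.diff(mag, axis=0), 0).sum(axis=1)])

print(envelope.shape, flux.shape)          # both sampled at fs / hop ~ 43 Hz
```

The design difference matters for polyphonic music: the envelope only registers overall loudness change, while flux also registers note onsets that change the spectrum without changing overall level.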
... Although the click structure is initially less important for the filled-duration illusion than empty events, it does matter whether an event structure is regular or irregular. The metrical beat in music is an endogenous phenomenon in which regular or quasi-regular events give rise to a felt pulse or tactus that is organized in regular, recurring cycles (Large et al., 2015; Large & Palmer, 2002; Phillips-Silver et al., 2011; Tierney & Kraus, 2015). The distinction between regular event structures with a perceived pulse and irregular event structures without a sense of pulse is important, since different neural substrates are involved in the auditory processing of each (Teki et al., 2011; Teki et al., 2012; van Wassenhove, 2016) ... (Repp & Bruttomesso, 2010; Wearden et al., 2007) ... (Cameron et al., 2003; Kellaris & Kent, 1992; North & Hargreaves, 1999). ...
... beats, measures). The metrical beat in music is an endogenous phenomenon, whereby regular or quasi-regular events give rise to a felt sense of pulse or "tactus" (London, 2012), as well as a sense that these pulses are organized in regularly recurring cycles or measures (Large et al., 2015; Large & Palmer, 2002; Phillips-Silver et al., 2011; Tierney & Kraus, 2015). The distinction between regular event structures with a felt sense of pulse versus irregular event structures with no felt sense of pulse is important, since different neural substrates are involved in the auditory processing of each (Teki et al., 2011). ...
... Perception & Psychophysics, 16(3), 449-458. https://doi.org/10.3758/BF03198571 ... Tierney, A., & Kraus, N. (2015). Neural entrainment to the rhythmic structure of music. Journal of Cognitive Neuroscience, 27(2), 400-408. ...
Thesis
Full-text available
The present dissertation project investigated auditory time perception with regard to the (perceived) tempo properties of music and the spontaneous motor tempo (SMT). The theoretical basis of these investigations is provided by models of the internal clock, which are based on an intrinsic timekeeper that sends out impulses in a linear or dynamic-oscillatory system. The SMT is considered to be an indicator of the pulse rate of this timekeeper. The aim of this dissertation was to answer the following overarching questions: (1) To what extent does the tempo of music influence the assessment of durations, and does this influence depend on the metrical level at which the musical beat is perceived (i.e., perceived tempo)? (2) Which factors influence the SMT independent of external stimuli such as music? (3) What role does musical experience play in auditory time perception of music, and are there systematic differences in the pulse rate of the internal clock? In order to answer these questions, four empirical studies were carried out: Studies 1 and 2 investigated the first question, and Studies 3 and 4 were devoted to the second question. The studies deepened the understanding of the time-distortion effect of musical tempo and showed that this effect also depends on individual sensorimotor synchronization. Furthermore, it could be shown that the internal clock, measured with the spontaneous motor tempo, depends not only on the time of day but also on the respective chronotype, which suggests an influence of the biological clock on the perception of time.
... Music is a highly complex stimulus and consists of several features, such as melody, timbre, pitch, tonality, harmony, and rhythm. Music has been observed to involve many aspects of neural processing: all four emotional pathways in the brain [1], synchronization of motor and sensory areas [2], intra-subject neural entrainment [3], and inter-subject correlation [4]. Musical Information Retrieval (MIR) studies have successfully extracted different features of music [5], which enables studying specific features and their brain responses. ...
... However, due to the highly complex nature of music, this has mostly been observed for pure tones [12]. Tierney and Kraus observed neural entrainment to the rhythmic patterns of naturalistic music, but only for a single song [3]. ...
... These peaks portray the significance and occurrence of that rhythmic pattern in the stimuli. As it has been observed that neural entrainment occurs for musical rhythmic sequences [3], our groups would also represent the resonance of the brain responses to those notes (X, 2X, or 1/4X). The songs in group X were 1, 2, 3, 5, 10; in 2X were 6, 8, 10, 11, 12; and in 1/4X were 7, 9, as shown in Table I. ...
... This rhythmic shift in neural excitability has been proposed to act as a core mechanism of selective attention that optimizes stimulus processing at specific moments in time (in phase with the rhythmic beat) [17-19]. Support for this Oscillation Selection Hypothesis comes from numerous studies demonstrating enhanced perceptual processing of stimuli which appear at rhythmically-predicted moments in time [1,17,18,20-24]. Recent work by Hickey and colleagues (2020) suggests that neural entrainment to low-frequency rhythm also modulates higher-order cognitive processing and the encoding of events into long-term memory [25]. ...
... It has also been suggested that manipulations of temporal attention may have their greatest effect on these later, post-perceptual stages of information processing [32,36,40]. In support of this proposal, effects of temporal orienting on early visual ERP components are not always observed and have been shown to depend on the perceptual demands of the task and the nature of the temporal orienting cues [22,32,40-44]. Thus, the effect of rhythm on memory encoding could primarily reflect changes in later, post-perceptual stages of information processing. ...
... Importantly, the current results extend this prior work by demonstrating that background, rhythmic cues that occur in a different modality (auditory) than the target stimuli (visual) also modulate post-perceptual ERPs. While other prior work has shown that rhythmic cues can also influence earlier N1 components associated with initial stimulus processing [8,30], the modulation of sensory/perceptual ERPs is not always observed following manipulations of temporal attention and may depend on the nature of the temporal cues or task context [22,40-44]. For example, in prior studies demonstrating rhythmic effects on the N1, participants performed perceptually-demanding target detection/discrimination tasks [8,30]. ...
Article
Full-text available
Accumulating evidence suggests that rhythmic temporal structures in the environment influence memory formation. For example, stimuli that appear in synchrony with the beat of background, environmental rhythms are better remembered than stimuli that appear out-of-synchrony with the beat. This rhythmic modulation of memory has been linked to entrained neural oscillations which are proposed to act as a mechanism of selective attention that prioritize processing of events that coincide with the beat. However, it is currently unclear whether rhythm influences memory formation by influencing early (sensory) or late (post-perceptual) processing of stimuli. The current study used stimulus-locked event-related potentials (ERPs) to investigate the locus of stimulus processing at which rhythm temporal cues operate in the service of memory formation. Participants viewed a series of visual objects that either appeared in-synchrony or out-of-synchrony with the beat of background music and made a semantic classification (living/non-living) for each object. Participants’ memory for the objects was then tested (in silence). The timing of stimulus presentation during encoding (in-synchrony or out-of-synchrony with the background beat) influenced later ERPs associated with post-perceptual selection and orienting attention in time rather than earlier ERPs associated with sensory processing. The magnitude of post-perceptual ERPs also differed according to whether or not participants demonstrated a mnemonic benefit for in-synchrony compared to out-of-synchrony stimuli, and was related to the magnitude of the rhythmic modulation of memory performance across participants. These results support two prominent theories in the field, the Dynamic Attending Theory and the Oscillation Selection Hypothesis, which propose that neural responses to rhythm act as a core mechanism of selective attention that optimize processing at specific moments in time. Furthermore, they reveal that in addition to acting as a mechanism of early attentional selection, rhythm influences later, post-perceptual cognitive processes as events are transformed into memory.
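As a rough illustration of the stimulus-locked ERP comparison this abstract describes (epoch around object onsets, average within condition, compare a late window), here is a minimal sketch; the sampling rate, onset times, and the late time window below are hypothetical placeholders, not the study's values.

```python
# Stimulus-locked ERP sketch: in-synchrony vs. out-of-synchrony onsets.
# All numbers below are hypothetical placeholders.
import numpy as np

fs = 500                        # assumed sampling rate (Hz)
pre, post = 0.2, 0.8            # epoch window: -200 ms to +800 ms

def epoch(eeg, onsets_s):
    """Cut stimulus-locked epochs from a 1-D EEG trace."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    return np.stack([eeg[int(o * fs) - n_pre:int(o * fs) + n_post]
                     for o in onsets_s])

rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 300)        # toy 5-minute recording
onsets_in = np.arange(5.0, 250.0, 4.0)     # hypothetical in-sync onsets (s)
onsets_out = onsets_in + 0.3               # hypothetical out-of-sync onsets

erp_in = epoch(eeg, onsets_in).mean(axis=0)    # trial-averaged ERPs
erp_out = epoch(eeg, onsets_out).mean(axis=0)

# Compare a late, post-perceptual window (a P3-like 300-600 ms range).
win = slice(int((pre + 0.3) * fs), int((pre + 0.6) * fs))
print("late-window difference (in - out):",
      erp_in[win].mean() - erp_out[win].mean())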
... This rhythmic shift in neural excitability has been proposed to act as a core mechanism of selective attention that optimizes stimulus processing at specific moments in time (in phase with the rhythmic beat) [14-16]. Support for this Oscillation Selection Hypothesis comes from numerous studies demonstrating enhanced perceptual processing of stimuli which appear at rhythmically-predicted moments in time [1,14,15,17-21]. Recent evidence suggests that neural entrainment to low-frequency rhythm also modulates higher-order cognitive processing and the encoding of events into long-term memory [22]. ...
... It has also been suggested that manipulations of temporal attention may have their greatest effect on these later, post-perceptual stages of information processing [30,34,38]. In support of this proposal, effects of temporal orienting on early visual ERP components are not always observed and have been shown to depend on the perceptual demands of the task and the nature of the temporal orienting cues [19,30,38-42]. Thus, the effect of rhythm on memory encoding could primarily reflect changes in later, post-perceptual stages of information processing. ...
... This finding is consistent with prior work demonstrating effects of temporal attention on visually-evoked N2 and P3 components [30,36,37], and reveals that temporal cues provided by musical rhythm influence post-perceptual visual processing. While rhythmic temporal cues have also been shown to influence the amplitude of earlier N1 components associated with initial stimulus processing [7,27], the modulation of sensory/perceptual ERPs is not always observed following manipulations of temporal attention and may depend on the nature of the temporal cues or task context [19,38-42]. ...
Preprint
Full-text available
Accumulating evidence suggests that rhythmic temporal structures in the environment influence memory formation. For example, stimuli that appear in synchrony with the beat of background, environmental rhythms are better remembered than stimuli that appear out-of-synchrony with the beat. This rhythmic modulation of memory has been linked to entrained neural oscillations which are proposed to act as a mechanism of selective attention by amplifying early sensory responses to events that coincide with the beat. The current study aimed to further test this hypothesis by using event-related potentials (ERPs) to investigate the locus of stimulus processing at which rhythm temporal cues operate in the service of memory formation. Participants incidentally encoded a series of visual objects while passively listening to background, instrumental music with a steady beat. Objects either appeared in-synchrony or out-of-synchrony with the background beat. Participants were then given a surprise subsequent memory test (in silence). The timing of stimulus presentation during encoding (in-synchrony or out-of-synchrony with the background beat) influenced canonical ERPs associated with post-perceptual selection and orienting attention in time. Importantly, post-perceptual ERPs also differed according to whether or not participants demonstrated a mnemonic benefit for in-synchrony compared to out-of-synchrony stimuli, and were related to the magnitude of the rhythmic modulation of memory across participants. These results support two prominent theories in the field, the Dynamic Attending Theory and the Oscillation Selection Hypothesis, which propose that neural responses to rhythm act as a core mechanism of selective attention that optimize processing at specific moments in time. Furthermore, they reveal that in addition to acting as a mechanism of early attentional selection, rhythm influences later, post-perceptual cognitive processes as events are transformed into memory.
... The power spectrum from the passive listening condition served as an independent auditory localizer to identify electrodes at which neural tracking of beat-related rhythms was greatest. Power at the beat frequency and harmonics was greatest over a frontocentral cluster (Fz, Cz, FC1, FC2, shown as black dots on scalp plots in Figure 4A) ... frequency (Nozaradan et al., 2011; Tierney & Kraus, 2015), resulting in normalized (corrected) power spectra. To determine the significance of EEG responses at the beat frequency (1.25 Hz), normalized power across the frontocentral channels during the passive listening and encoding tasks was tested against zero using a one-tailed t-test to measure neural tracking of the musical rhythm. ...
... Although not traditionally associated with episodic memory, such low-frequency neural responses in the delta range are thought to play a critical role in selective attention by dynamically modulating the excitability of large neuronal populations over time (Cravo, Rohenkohl, Wyart & Nobre, 2013; Lakatos et al., 2008; Schroeder & Lakatos, 2009). Importantly, the alignment of low-frequency neural activity to the timing of external rhythms has been shown to optimize processing at specific moments in time (e.g., in-phase with the rhythm) and has been proposed to serve as a fundamental mechanism of temporal attention (Calderone et al., 2014; Doelling & Poeppel, 2015; Frey et al., 2015; Haegens & Zion Golumbic, 2018; Henry, Herrmann, & Obleser, 2014; Lakatos et al., 2008; Large & Jones, 1999; Mathewson et al., 2012; Morillon & Schroeder, 2015; for review, see Schroeder & Lakatos, 2009; Tierney & Kraus, 2015). ...
... the topography commonly observed in the literature for auditory rhythms (e.g., Henry, Herrmann, Kunke, & Obleser, 2017; Nozaradan, Peretz, Missal, & Mouraux, 2011; Nozaradan, Peretz, & Mouraux, 2012a; Nozaradan, Peretz, & Mouraux, 2012b; Stupacher, Witte, Hove, & Wood, 2016; Tierney & Kraus, 2015). Power spectra were averaged across this frontocentral cluster for both the encoding task and passive listening periods, and the contribution of noise was removed at each bin of the frequency spectrum by subtracting the average power measured at two frequency bins (+/- |0.15-0.2| ...
Article
Full-text available
Time is a critical component of episodic memory. Yet it is currently unclear how different types of temporal signals are represented in the brain and how these temporal signals support episodic memory. The current study investigated whether temporal cues provided by low-frequency environmental rhythms influence memory formation. Specifically, we tested the hypothesis that neural tracking of low-frequency rhythm serves as a mechanism of selective attention that dynamically biases the encoding of visual information at specific moments in time. Participants incidentally encoded a series of visual objects while passively listening to background, instrumental music with a steady beat. Objects either appeared in-synchrony or out-of-synchrony with the background beat. Participants were then given a surprise subsequent memory test (in silence). Results revealed significant neural tracking of the musical beat at encoding, evident in increased electrophysiological power and inter-trial phase coherence at the perceived beat frequency (1.25 Hz). Importantly, enhanced neural tracking of the background rhythm at encoding was associated with superior subsequent memory for in-synchrony compared to out-of-synchrony objects at test. Together, these results provide novel evidence that the brain spontaneously tracks low-frequency musical rhythm during naturalistic listening situations, and that the strength of this neural tracking is associated with the effects of rhythm on higher-order cognitive processes such as episodic memory.
... The EEG and stimulus epochs were then transformed to the time-frequency domain using wavelets linearly spaced from 3 to 7 cycles over 0.5-30 Hz with a frequency resolution of 0.5 Hz and a temporal resolution of 10 ms. Neural data were evaluated across all frequencies and electrodes [34,67]. Using the complex values derived from EEG data and stimuli, cerebro-acoustic phase coherence was estimated (see Harding et al. [33] for the formula). ...
... For statistical analysis, in accordance with prior research [57,67] and the topographies of our data (Figure 2), we evaluated theta power at a frontal electrode pool. Theta oscillation power in this region was averaged. ...
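The excerpt defers to Harding et al. for the exact cerebro-acoustic coherence formula; the sketch below shows the general idea using a common stand-in, the magnitude of the mean EEG-stimulus phase difference across trials, computed from Hilbert phases rather than the wavelet coefficients used in the study.

```python
# Generic cerebro-acoustic phase coherence sketch (Hilbert phases stand in
# for the study's wavelet transform; see Harding et al. for their formula).
import numpy as np
from scipy.signal import hilbert

fs = 250
n_trials, n_samples = 30, fs * 4
t = np.arange(n_samples) / fs

rng = np.random.default_rng(2)
# Toy data: EEG trials weakly phase-locked to a 2 Hz stimulus rhythm.
stim = np.sin(2 * np.pi * 2 * t)
eeg = (np.sin(2 * np.pi * 2 * t + 0.5)
       + 2.0 * rng.standard_normal((n_trials, n_samples)))

phi_stim = np.angle(hilbert(stim))          # stimulus phase over time
phi_eeg = np.angle(hilbert(eeg, axis=1))    # per-trial EEG phase

# Coherence in [0, 1]: 1 means a constant EEG-stimulus lag on every trial.
coherence = np.abs(np.mean(np.exp(1j * (phi_eeg - phi_stim)), axis=0)).mean()
print(f"cerebro-acoustic phase coherence ~ {coherence:.2f}")
```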
Article
Full-text available
How speech prosody is processed in the brain during language production remains an unsolved issue. The present work used the phrase-recall paradigm to analyze brain oscillation underpinning rhythmic processing in speech production. Participants were told to recall target speeches aloud consisting of verb–noun pairings with a common (e.g., [2+2], the numbers in brackets represent the number of syllables) or uncommon (e.g., [1+3]) rhythmic pattern. Target speeches were preceded by rhythmic musical patterns, either congruent or incongruent, created by using pure tones at various temporal intervals. Electroencephalogram signals were recorded throughout the experiment. Behavioral results in 2+2 target speeches showed a rhythmic priming effect when comparing congruent and incongruent conditions. Cerebral-acoustic coherence analysis showed that neural activities synchronized with the rhythmic patterns of primes. Furthermore, target phrases that had congruent rhythmic patterns with a prime rhythm were associated with increased theta-band (4–8 Hz) activity in the time window of 400–800 ms in both the 2+2 and 1+3 target conditions. These findings suggest that rhythmic patterns can be processed online. Neural activities synchronize with the rhythmic input and speakers create an abstract rhythmic pattern before and during articulation in speech production.
... Such pulse-like meter perceived when listening to music is often manifested at the behavioral level by a spontaneous periodic entrainment of the motor system (e.g., periodically tapping, or bobbing the head to the meter) (Nozaradan 2014). At the neural level, entrainment to musical rhythms can be reflected by activity preferentially synchronized to meter frequencies perceived in the rhythms (Nozaradan et al. 2012; Tierney and Kraus 2014; Meltzer et al. 2015). By using the frequency-tagging approach, studies consistently showed that electroencephalography (EEG) responses to rhythms designed to induce a spontaneous perception of meter are elicited at multiple frequencies, corresponding to the exact same frequencies contained in the acoustic envelope of the rhythms (Nozaradan et al. 2012, 2017; Lenc et al. 2018). ...
... In order to obtain valid estimates of frequency-tagged responses elicited by periodic stimulation, the contribution of stimulation-unrelated residual background noise was reduced by subtracting the average amplitude at neighboring frequency bins relative to each frequency bin of the spectrum (bins 2-5 on both sides), separately for each participant, condition, and electrode. This noise subtraction procedure relies on the assumption that in the absence of periodic EEG responses to stimulation, the amplitude response at a given frequency bin should be similar to the mean amplitude of the surrounding bins (Nozaradan et al. 2012, 2018; Tierney and Kraus 2014). Noise-subtracted spectra were then averaged across all the scalp EEG channels (Fz, FCz, Cz, C3, and C4) for envelope-following responses analysis and across the 3 midline channels (Fz, FCz, and Cz) for the frequency-following responses analysis. ...
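This noise-subtraction step is concrete enough to sketch directly. A minimal version, assuming a 1-D amplitude spectrum and using bins 2-5 on both sides as the excerpt states (the toy spectrum is illustrative):

```python
# Noise subtraction at neighboring frequency bins (bins 2-5 on both sides),
# per the procedure described above; the toy spectrum is illustrative.
import numpy as np

def subtract_noise(amplitude, lo=2, hi=5):
    """Subtract the mean of bins lo..hi on each side from every bin."""
    cleaned = np.full_like(amplitude, np.nan)  # edge bins left undefined
    for k in range(hi, len(amplitude) - hi):
        neighbours = np.concatenate([amplitude[k - hi:k - lo + 1],
                                     amplitude[k + lo:k + hi + 1]])
        cleaned[k] = amplitude[k] - neighbours.mean()
    return cleaned

rng = np.random.default_rng(3)
# Toy spectrum: flat noise floor plus a frequency-tagged peak at bin 40.
spec = 1.0 + 0.1 * rng.standard_normal(100)
spec[40] += 0.8

clean = subtract_noise(spec)
print(f"raw peak: {spec[40]:.2f}, noise-subtracted peak: {clean[40]:.2f}")
```

On a pure-noise spectrum this procedure yields values scattered around zero, which is exactly the assumption the excerpt describes.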
Article
The extent of high-level perceptual processing during sleep remains controversial. In wakefulness, perception of periodicities supports the emergence of high-order representations such as the pulse-like meter perceived while listening to music. Electroencephalography (EEG) frequency-tagged responses elicited at envelope frequencies of musical rhythms have been shown to provide a neural representation of rhythm processing. Specifically, responses at frequencies corresponding to the perceived meter are enhanced over responses at meter-unrelated frequencies. This selective enhancement must rely on higher-level perceptual processes, as it occurs even in irregular (i.e., syncopated) rhythms where meter frequencies are not prominent input features, thus ruling out acoustic confounds. We recorded EEG while presenting a regular (unsyncopated) and an irregular (syncopated) rhythm across sleep stages and wakefulness. Our results show that frequency-tagged responses at meter-related frequencies of the rhythms were selectively enhanced during wakefulness but attenuated across sleep states. Most importantly, this selective attenuation occurred even in response to the irregular rhythm, where meter-related frequencies were not prominent in the stimulus, thus suggesting that neural processes selectively enhancing meter-related frequencies during wakefulness are weakened during rapid eye movement (REM) and further suppressed in non-rapid eye movement (NREM) sleep. These results indicate preserved processing of low-level acoustic properties but limited higher-order processing of auditory rhythms during sleep.
... One possibility is that neural activity synchronizes to low-frequency fluctuations in the amplitude of the attended stream. Prior EEG research has demonstrated that low-frequency neural activity phase-locks to the temporal structure of non-verbal auditory stimuli (Nozaradan et al. 2011, 2012; Tierney and Kraus 2014; Doelling and Poeppel 2015; Cirelli et al. 2016; Harding et al. 2019), and that manipulating the perceived temporal structure of rhythmically ambiguous stimuli can modulate neural entrainment (Nozaradan et al. 2011). These results are also consistent with theories of neural oscillators resonating to the temporal structure of sound sequences as attention to time waxes and wanes (Large and Jones 1999; Large 2008), although they could also reflect other neural mechanisms, such as attention-driven enhancement of exogenous neural responses. ...
... Prior research has shown that perception of temporal structure is tied to an increase in low-frequency inter-trial phase locking (Doelling and Poeppel 2015) and spectral power at frequencies prominent in the stimulus (Nozaradan et al. 2011, 2012; Tierney and Kraus 2014; Cirelli et al. 2016). Here we show that selective attention to tone sequences is linked to an increase in phase locking at the frequency of tone melody presentation and to a shift in neural phase corresponding to the phase of the attended melody. ...
Article
Full-text available
To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.
... The resulting power spectra were log-transformed (10*log10, dB conversion) and subsequently averaged across trials for each channel and every participant. Similar to other EEG studies (Zamm et al., 2017; Tierney & Kraus, 2014; Nozaradan et al., 2011, 2012), a noise reduction procedure was then applied to reduce the influence of residual spectral noise on each channel, by subtracting from each frequency the mean power at the ±3 neighboring frequency bins, corresponding to ±0.1875 Hz. The noise reduction could yield a flat spectrum centered around 0 if the signal contained only noise, or a peak in the noise-subtracted spectrum if the signal contained non-noise components. ...
... Noise-subtracted spectra were averaged across all channels for each participant (Tierney & Kraus, 2014; Nozaradan et al., 2011, 2012), and PSDs at the target frequencies of 1.89, 0.94, and 2.84 Hz were extracted from each participant's power spectrum and exported for subsequent analyses. Auditory and motor ROIs were defined by the electrodes that displayed maximal PSD across participants in grand-averaged topographies for the Listen (averaged across the three Rhythm conditions) and Motor tasks, respectively, following Nozaradan et al. (2012); these ROIs were electrode FCz in the Listen task and electrode C3 in the Motor task. ...
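A small follow-on sketch of the dB conversion and target-frequency extraction described here (the ±3-bin noise subtraction mirrors the earlier sketch); the bin width is an assumption chosen so that 3 bins span the stated ±0.1875 Hz.

```python
# dB conversion and PSD extraction at target frequencies (illustrative).
# Assumed bin width 0.0625 Hz, so +/-3 bins = +/-0.1875 Hz as stated above.
import numpy as np

df = 0.0625
freqs = np.arange(0.0, 30.0, df)
rng = np.random.default_rng(4)
power = 1e-12 * (1.0 + 0.1 * rng.standard_normal(freqs.size))  # toy PSD

power_db = 10 * np.log10(power)      # 10*log10 dB conversion, as above

def psd_at(f_target):
    """dB power at the bin closest to a target frequency."""
    return power_db[np.argmin(np.abs(freqs - f_target))]

for f in (1.89, 0.94, 2.84):         # target frequencies from the excerpt
    print(f"{f:.2f} Hz: {psd_at(f):.1f} dB")
```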
Article
Full-text available
We addressed how rhythm complexity influences auditory–motor synchronization in musically trained individuals who perceived and produced complex rhythms while EEG was recorded. Participants first listened to two-part auditory sequences (Listen condition). Each part featured a single pitch presented at a fixed rate; the integer ratio formed between the two rates varied in rhythmic complexity from low (1:1) to moderate (1:2) to high (3:2). One of the two parts occurred at a constant rate across conditions. Then, participants heard the same rhythms as they synchronized their tapping at a fixed rate (Synchronize condition). Finally, they tapped at the same fixed rate (Motor condition). Auditory feedback from their taps was present in all conditions. Behavioral effects of rhythmic complexity were evidenced in all tasks; detection of missing beats (Listen) worsened in the most complex (3:2) rhythm condition, and tap durations (Synchronize) were most variable and least synchronous with stimulus onsets in the 3:2 condition. EEG power spectral density was lowest at the fixed rate during the 3:2 rhythm and greatest during the 1:1 rhythm (Listen and Synchronize). ERP amplitudes corresponding to an N1 time window were smallest for the 3:2 rhythm and greatest for the 1:1 rhythm (Listen). Finally, synchronization accuracy (Synchronize) decreased as amplitudes in the N1 time window became more positive during the high rhythmic complexity condition (3:2). Thus, measures of neural entrainment corresponded to synchronization accuracy, and rhythmic complexity modulated the behavioral and neural measures similarly.
... In a similar paradigm, Tierney and Kraus (2015) investigated neural entrainment to a beat and its subdivisions in ecologically valid music (a popular song). In this study, neural entrainment is defined as the phase-locking of oscillations (mainly beta oscillations in the EEG) to the rhythmic structure of music. ...
... The latter could help explain the effect of a strong beat for mnemonic purposes, where attention is facilitated when paired with the expected beat. More specifically, Tierney and Kraus (2015) suggested that meter and beat tracking in otherwise competing sensory environments could predict verbal processing abilities. If this proves true, using metrically strong pieces would be particularly important in music therapy for language development, especially in individuals with sensory processing difficulties, such as people with ASD. ...
Article
Full-text available
Introduction: Music therapists have turned to neuroscience for an explanation of the therapeutic effect of music. Following this interest, the present author conducted a narrative review of this emerging topic. Method: The author searched PubMed, PsycInfo, Web of Science, Google Scholar and a university database with “music” and “neuroscience” as search terms, for publications between 2000 and 2015, including only those relevant to music processing. A full-text review was performed, and thematic summaries were compiled. Results: Findings indicate that music is a complex, generative, and recursive phenomenon that uses neural networks similar to those engaged by other sounds. It generates emotional responses processed sequentially and simultaneously by cortical and subcortical areas (vmPFC, insula, amygdala, thalamus, hippocampus and parahippocampus, hypothalamus, NAc, caudate nucleus, and OFC). Music generates activity in motor areas (premotor, primary motor, basal ganglia, and cerebellum) and also engages higher-order processing. Discussion: Music perception is probably the result of the Gestalt at all levels. Extraneous variables, such as expertise, attitude, mood, environment, and interpersonal relationships can also modify music processing. Further, this literature only pertains to receptive experiences, and not the active involvement common in music therapy. Recommendations for music interventions should consider the complexity of music processing and the limitations of our current technology.
... Neural entrainment is defined as the process whereby brain activity, and more specifically neuronal oscillations measured by electroencephalography (EEG), synchronizes with external (exogenous) stimulus rhythms. In the auditory modality, neural entrainment has been mainly investigated in response to two types of dynamic and rhythmic stimuli: speech and music (e.g., Nozaradan et al., 2011, 2012, 2018; Ding and Simon, 2012; Nozaradan, 2014; Tierney and Kraus, 2015; Ding et al., 2016; Zhou et al., 2016; Stupacher et al., 2017; Tal et al., 2017; Lenc et al., 2018; Jin et al., 2020). In these studies, low-frequency (< 6 Hz) neural entrainment has been reliably observed for both physical and abstract properties of the stimuli, such as the rhythms of musical beats and some linguistic constituents. ...
Article
Full-text available
Neural entrainment is defined as the process whereby brain activity, and more specifically neuronal oscillations measured by EEG, synchronize with exogenous stimulus rhythms. Despite the importance that neural oscillations have assumed in recent years in the field of auditory neuroscience and speech perception, in human infants the oscillatory brain rhythms and their synchronization with complex auditory exogenous rhythms are still relatively unexplored. In the present study, we investigate infant neural entrainment to complex non-speech (musical) and speech rhythmic stimuli; we provide a developmental analysis to explore potential similarities and differences between infants’ and adults’ ability to entrain to the stimuli; and we analyze the associations between infants’ neural entrainment measures and the concurrent level of development. 25 8-month-old infants were included in the study. Their EEG signals were recorded while they passively listened to non-speech and speech rhythmic stimuli modulated at different rates. In addition, Bayley Scales were administered to all infants to assess their cognitive, language, and social-emotional development. Neural entrainment to the incoming rhythms was measured in the form of peaks emerging from the EEG spectrum at frequencies corresponding to the rhythm envelope. Analyses of the EEG spectrum revealed clear responses above the noise floor at frequencies corresponding to the rhythm envelope, suggesting that – similarly to adults – infants at 8 months of age were capable of entraining to the incoming complex auditory rhythms. Infants’ measures of neural entrainment were associated with concurrent measures of cognitive and social-emotional development.
... With the development of embodied cognition theory, researchers have started to focus on how the human body functions as a mediator for music cognitive processing, thereby establishing feelings and concepts in psychophysiological and sensory-motor systems (Leman, 2008; Leman & Maes, 2014). A wide range of studies has investigated sensorimotor synchronization (e.g., Goebl & Palmer, 2009; Palmer et al., 2019; Phillips-Silver & Keller, 2012; Phillips-Silver & Trainor, 2005; Van Der Steen & Keller, 2013) and neural entrainment (e.g., Large, 2010; Large et al., 2015; Nozaradan et al., 2012, 2015; Stupacher et al., 2017; Tierney & Kraus, 2015) in musical ensembles. These studies have extensively revealed how individuals physically coordinate their actions with musical rhythm and collaborators, and how brain waves synchronize to periodic stimuli. ...
Thesis
Full-text available
This master's thesis examines musicians' cardiac rhythms in string quartet performances. It attempts to capture and demonstrate cardiac dynamics and synchrony in musical ensembles by analyzing two cases: a student string quartet (the Borealis String Quartet) and a world-renowned quartet (the Danish String Quartet) performing in different experimental configurations. Both string quartets measured resting heart rate as the Quiet Baseline and performed Joseph Haydn's String Quartet in B-flat major repeatedly under conditions that differed in communication constraints: Blind, Violin-isolated, Score-directed, Normal, and Concert. The Danish String Quartet additionally performed a Moving Baseline condition in which they played a scale together, as well as a Sight-reading condition involving a music excerpt they had never heard or practiced before. Unlike most previous studies on music and physiological responses, this study employs both linear and nonlinear methods to reveal different aspects of cardiac dynamics from the individual to the group level. Firstly, we observed more predictable individual cardiac dynamics during musical performance than during the resting baseline in both quartets. Secondly, group-level synchrony analysis demonstrated that both quartets' cardiac synchrony levels increased during performance conditions relative to the Quiet Baseline. Moreover, the cardiac synchrony level of the Borealis String Quartet was affected to varying degrees by adverse conditions, whereas the Danish String Quartet, as an expert group, was more resistant to constraints. Finally, we compared the cardiac synchrony levels of the two quartets in identical pairwise conditions and found that the Danish String Quartet had a higher cardiac coupling rate relative to the Borealis String Quartet. Overall, our findings suggest that performing in a string quartet facilitates more predictable cardiac dynamics and synchrony. Different constraints may affect cardiac synchrony to a degree associated with the level of expertise.
... Even if two people listen to the same song, their brain's oscillatory systems may be activated differently, accounting for differences in perceptual and affective elements. Several studies have demonstrated various aspects of neural entrainment, such as beat and pitch recognition [17,18,23]. A recent study discusses differences in music perception between musicians and non-musicians, and reports that audiovisual perception may be broadly correlated with auditory perception. ...
Preprint
Full-text available
We examine user and song identification from neural (EEG) signals. Owing to perceptual subjectivity in human-media interaction, music identification from brain signals is a challenging task. We demonstrate that subjective differences in music perception aid user identification, but hinder song identification. In an attempt to address intrinsic complexities in music identification, we provide empirical evidence on the role of enjoyment in song recognition. Our findings reveal that considering song enjoyment as an additional factor can improve EEG-based song recognition.
... In fact, the highest coherence was observed at the first harmonic and not at the stimulation tempo itself (Fig. 2I). This replicates previous work that also showed higher coherence (Kaneshiro et al., 2020) and spectral amplitude (Tierney and Kraus, 2015) at the first harmonic than at the musical beat rate. There are several potential reasons for this finding. ...
Preprint
Full-text available
Neural activity in the auditory system synchronizes to sound rhythms, and brain environment synchronization is thought to be fundamental to successful auditory perception. Sound rhythms are often operationalized in terms of the sound's amplitude envelope. We hypothesized that, especially for music, the envelope might not best capture the complex spectrotemporal fluctuations that give rise to beat perception and synchronize neural activity. This study investigated 1) neural entrainment to different musical features, 2) tempo dependence of neural entrainment, and 3) dependence of entrainment on familiarity, enjoyment, and ease of beat perception. In this electroencephalography study, 37 human participants listened to tempo modulated music (1 to 4 Hz). Independent of whether the analysis approach was based on temporal response functions (TRFs) or reliable components analysis (RCA), the spectral flux of music, as opposed to the amplitude envelope, evoked strongest neural entrainment. Moreover, music with slower beat rates, high familiarity, and easy to perceive beats elicited the strongest neural response. Based on the TRFs, we could decode music stimulation tempo, but also perceived beat rate, even when the two differed. Our results demonstrate the importance of accurately characterizing musical acoustics in the context of studying neural entrainment, and demonstrate the sensitivity of entrainment to musical tempo, familiarity, and beat salience.
... Here, we propose that MRRL serves as an acoustic stimulus inwardly brought to mind via rhythmic subvocalization. As such, it is bound to timing and bears the potential to be entrained (Di Liberto et al., 2015; Kösem et al., 2018; Kotz et al., 2018, p. 902; Kotz & Schwartze, 2010; Merker et al., 2009; Tierney & Kraus, 2015). Accordingly, we hypothesize that readers pick up the MRRL rhythm when they read with an inner voice and, thus, that they should experience a sense of violation if the accuracy and predictability of MRRL is interrupted. ...
Article
Full-text available
The present study investigates effects of conventionally metered and rhymed poetry on eye-movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In poem layout, verse endings coincided with line breaks. In prose layout, verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditory expectations that are based on a rhythmic “audible gestalt” and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt-anomalies but showed differential effects in poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt-anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye-movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
... Neural tracking of acoustic speech representations is modulated by attention (Ding and Simon, 2012a; Horton et al., 2014; O'Sullivan et al., 2015; Das et al., 2016) and speech understanding (Vanthornhout et al., 2018; Etard and Reichenbach, 2019; Iotzov and Parra, 2019; Lesenfants et al., 2019). However, the observation of neural speech tracking does not guarantee speech intelligibility, since music (Tierney and Kraus, 2015) and the ignored talker in the two-talker scenario are also significantly tracked by the brain (Ding and Simon, 2012a; Horton et al., 2014; O'Sullivan et al., 2015). ...
Preprint
When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to audiobook speech of 29 participants (22 ♀). We examined whether these representations contribute unique information over and beyond acoustic neural tracking and each other. Indeed, not all of these linguistic representations were significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were all significantly tracked over and beyond acoustic properties. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize processing of the linguistic content of speech. Significance Statement: For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow for behavior-free evaluation of speech understanding; this would open doors towards better quantification of speech understanding in populations in whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, allowing better-targeted interventions and better fitting of hearing devices.
... beats, measures). The metrical beat in music is an endogenous phenomenon, whereby regular or quasi-regular events give rise to a felt sense of pulse or "tactus" (London, 2012), as well as a sense that these pulses are organized in regularly recurring cycles or measures (Large et al., 2015; Large & Palmer, 2002; Phillips-Silver et al., 2011; Tierney & Kraus, 2015). The distinction between regular event structures with a felt sense of pulse versus irregular event structures with no felt sense of pulse is important, since different neural substrates are involved in the auditory processing of each (Teki et al., 2011). ...
Article
Full-text available
Our perception of the duration of a piece of music is related to its tempo. When listening to music, absolute durations may seem longer as the tempo, the rate of an underlying pulse or beat, increases. Yet the perception of tempo itself is not absolute. In a study on perceived tempo, participants were able to distinguish between different tempo-shifted versions of the same song (+5 beats per minute, BPM), yet their tempo ratings did not match the actual BPM rates; this finding was called the tempo anchoring effect (TAE). In order to gain further insights into the relation between duration and tempo perception in music, the present study investigated the effect of musical tempo on two different duration measures, to see if there is an analog to the TAE in duration perception. Using a repeated-measures design, 32 participants (16 musicians) were randomly presented with instrumental excerpts of Disco songs at the original tempi and in tempo-shifted versions. The tasks were (a) to reproduce the absolute duration of each stimulus (14-20 s), (b) to estimate the absolute duration of the stimuli in seconds, and (c) to rate the perceived tempo. Results show that duration reproductions were longer with faster tempi, yet no such effect was found for duration estimations. Thus, lower-level reproductions were affected by the tempo, but higher-level estimations were not. The tempo-shifted versions showed no effect on either duration measure, suggesting that the duration-lengthening effect requires a tempo difference of at least 20 BPM, depending on the duration measure. Results for perceived tempo replicated the typical rating pattern of the TAE, but no analog was found in the duration measures. The roles of spontaneous motor tempo and musical experience are discussed, and implications for future studies are given.
... After correcting for multiple comparisons, we observed higher levels of spectral magnitude at the first harmonic of the beat (2.5 Hz) for bimodal compared to auditory presentations of isochronous rhythm. Spectral magnitude at harmonic frequencies has been interpreted with respect to voluntary attention directed at integer multiples of the beat (Tierney & Kraus, 2014; Nozaradan et al., 2012; Nozaradan et al., 2011). Moreover, humans have a unique ability to readily switch their attention between different beat rates (Repp & Su, 2013). ...
Article
Full-text available
The ability to synchronize movements to a rhythmic stimulus, referred to as sensorimotor synchronization (SMS), is a behavioral measure of beat perception. Although SMS is generally superior when rhythms are presented in the auditory modality, recent research has demonstrated near-equivalent SMS for vibrotactile presentations of isochronous rhythms [Ammirante, P., Patel, A. D., & Russo, F. A. Synchronizing to auditory and tactile metronomes: A test of the auditory–motor enhancement hypothesis. Psychonomic Bulletin & Review, 23, 1882–1890, 2016]. The current study aimed to replicate and extend this study by incorporating a neural measure of beat perception. Nonmusicians were asked to tap to rhythms or to listen passively while EEG data were collected. Rhythmic complexity (isochronous, nonisochronous) and presentation modality (auditory, vibrotactile, bimodal) were fully crossed. Tapping data were consistent with those observed by Ammirante et al. (2016), revealing near-equivalent SMS for isochronous rhythms across modality conditions and a drop-off in SMS for nonisochronous rhythms, especially in the vibrotactile condition. EEG data revealed a greater degree of neural entrainment for isochronous compared to nonisochronous trials as well as for auditory and bimodal compared to vibrotactile trials. These findings led us to three main conclusions. First, isochronous rhythms lead to higher levels of beat perception than nonisochronous rhythms across modalities. Second, beat perception is generally enhanced for auditory presentations of rhythm but still possible under vibrotactile presentation conditions. Finally, exploratory analysis of neural entrainment at harmonic frequencies suggests that beat perception may be enhanced for bimodal presentations of rhythm.
... Using this method, the authors further observed that neural entrainment at beat- and meter-related frequencies is selectively amplified compared to unrelated frequencies in complex rhythms, providing more evidence of the endogenous aspect of beat and meter processing (Nozaradan, Peretz, & Mouraux, 2012). Moreover, using ecologically valid music, it was demonstrated that entrainment to meter, but not beat, can be disrupted by conflicting cues, further showing that meter processing may involve higher-level processes (Tierney & Kraus, 2015). Most recently, Li and colleagues examined neural entrainment using simultaneous EEG-fMRI and observed networks that are distinct for beat versus meter entrainment (Li et al., 2019). ...
Article
Full-text available
Introduction: Music is ubiquitous and powerful in the world's cultures. Music listening involves abundant information processing (e.g., pitch, rhythm) in the central nervous system and can also induce changes in physiology, such as heart rate and perspiration. Yet previous studies have tended to examine music information processing in the brain separately from physiological changes. In the current study, we focused on the temporal structure of music (i.e., beat and meter) and examined physiology, neural processing, and, most importantly, the relation between the two. Methods: Simultaneous MEG and ECG data were collected from a group of adults (N = 15) while they passively listened to duple and triple rhythmic patterns. To characterize physiology, we measured heart rate variability (HRV), indexing parasympathetic nervous system (PSNS) function. To characterize neural processing of beat and meter, we examined neural entrainment and calculated the beat-to-meter ratio to index the relation between beat-level and meter-level entrainment. Specifically, the current study investigated three related questions: (a) whether listening to musical rhythms affects HRV; (b) whether the neural beat-to-meter ratio differs between metrical conditions; and (c) whether the neural beat-to-meter ratio is related to HRV. Results: Results suggest that while, at the group level, both HRV and neural processing are highly similar across metrical conditions, at the individual level the neural beat-to-meter ratio significantly predicts HRV, establishing a neural-physiological link. Conclusion: This observed link is discussed under the theoretical "neurovisceral integration model," and it provides important new perspectives for music cognition and auditory neuroscience research.
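As one plausible reading of the beat-to-meter index described above, the sketch below computes the ratio of spectral amplitude at an assumed beat frequency to that at the corresponding meter frequency. Frequencies, recording length, and names are assumptions for illustration.

```python
import numpy as np

# Hedged sketch: beat-to-meter ratio from a single MEG channel's spectrum.
# A duple meter is assumed, with the meter frequency at half the beat rate.
fs = 500.0
meg = np.random.randn(int(fs * 120))            # stand-in for an MEG channel
freqs = np.fft.rfftfreq(len(meg), 1.0 / fs)
amp = np.abs(np.fft.rfft(meg)) / len(meg)

beat_hz, meter_hz = 2.4, 1.2                    # assumed stimulus rates
beat_amp = amp[np.argmin(np.abs(freqs - beat_hz))]
meter_amp = amp[np.argmin(np.abs(freqs - meter_hz))]
print("beat-to-meter ratio:", beat_amp / meter_amp)
```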
... In line with this view, a growing body of evidence suggests that meter perception is related to fluctuations of neural activity time-locked to the perceived metric pulses (Nozaradan et al., 2012; Chemin et al., 2014; Tierney and Kraus, 2014; Nozaradan, Mouraux, et al., 2016; Nozaradan, Peretz, et al., 2016; Nozaradan, Schwartze, et al., 2017; Tal et al., 2017; ...
Article
Full-text available
When listening to music, people often perceive and move along with a periodic meter. However, the dynamics of mapping between meter perception and the acoustic cues to meter periodicities in the sensory input remain largely unknown. To capture these dynamics, we recorded the EEG while non-musician and musician participants listened to nonrepeating rhythmic sequences where acoustic cues to meter frequencies either gradually decreased (from regular to degraded) or increased (from degraded to regular). The results revealed greater neural activity selectively elicited at meter frequencies when the sequence gradually changed from regular to degraded compared to the opposite. Importantly, this effect was unlikely to arise from overall gain, or low-level auditory processing, as revealed by physiological modeling. Moreover, the context effect was more pronounced in non-musicians, who also demonstrated facilitated sensory-motor synchronization with the meter for sequences that started as regular. In contrast, musicians showed weaker effects of recent context in their neural responses and robust ability to move along with the meter irrespective of stimulus degradation. Together, our results demonstrate that brain activity elicited by rhythm does not only reflect passive tracking of stimulus features, but represents continuous integration of sensory input with recent context.
... Several studies have tried to shed further light on the link between rhythmic processing and language through neurophysiological and psychophysiological investigations, which have revealed a surprising correspondence between modulation energy along multiple timescales within the speech envelope and modulations in cortical activity during speech processing (Giraud & Poeppel, 2012; Myers, Lense & Gordon, 2019, among others). This phenomenon, known as neural entrainment, has also been described for music (Tierney & Kraus, 2014; Doelling & Poeppel, 2015) and seems to constitute the neurological basis for the relationship between music and language. Other studies, focusing on the differences found between musicians and nonmusicians, and on the presence and characteristics of dyslexic individuals in both populations, have further highlighted that many of the advantages shown by musicians are found with nonverbal (rhythmic or musical) as well as with verbal stimuli (Magne, Schön, & Besson, 2006; Weiss, Granot, & Ahissar, 2014; Zuk et al., 2017). ...
Article
Full-text available
Link to read-only full text: https://onlinelibrary.wiley.com/share/author/Q6JDPU7BUZIPTFBYYIYT?target=10.1111/desc.12981 Rhythm perception seems to be crucial to language development. Many studies have shown that children with Developmental Dyslexia and Developmental Language Disorder have difficulties in processing rhythmic structures. In this study, we investigated the relationships between prosody and musical processing in Italian children with typical and atypical development. The tasks aimed to reproduce linguistic prosodic structures through musical sequences, offering a direct comparison between the two domains without violating the specificities of each one. Sixteen Typically Developing (TD) children, sixteen children with a diagnosis of Developmental Dyslexia, and sixteen with a diagnosis of Developmental Language Disorder (age 10-13 years) participated in the experimental study. Three tasks were administered: an association task between a sentence and its humming version, a stress discrimination task (between pairs of sounds reproducing the intonation of Italian trisyllabic words), and an association task between trisyllabic non-words with different stress positions and three-note musical sequences with different musical stress. Children with Developmental Language Disorder performed significantly worse than Typically Developing children on the humming test. By contrast, children with Developmental Dyslexia were significantly slower than TD children in associating non-words with musical sequences. Accuracy and speed in the experimental tests correlated with metaphonological, language, and word reading scores. Theoretical and clinical implications are discussed within a multidimensional model of neurodevelopmental disorders including prosodic and rhythmic skills at word and sentence level.
... Accumulating empirical evidence suggests that when presented with an external rhythmic stimulus, neural oscillations in the brain align at multiple frequency levels to this stimulus (Doelling & Poeppel, 2015; Fujioka, Zendel, & Ross, 2010; Giraud & Poeppel, 2012; Harding, Sammler, Henry, Large, & Kotz, 2019; Nozaradan, 2014; Nozaradan, Peretz, Missal, & Mouraux, 2011; Stupacher, Wood, & Witte, 2017; Tierney & Kraus, 2014). However, the underlying cognitive and neural basis of the entrainment of endogenous neural oscillations to exogenous rhythms is still debated (Haegens & Zion Golumbic, 2018; Novembre & Iannetti, 2018; Rimmele, Morillon, Poeppel, & Arnal, 2018; Zoefel, ten Oever, & Sack, 2018). ...
Article
Full-text available
When listening to temporally regular rhythms, most people are able to extract the beat. Evidence suggests that the neural mechanism underlying this ability is the phase alignment of endogenous oscillations to the external stimulus, allowing for the prediction of upcoming events (i.e., dynamic attending). Relatedly, individuals with dyslexia may have deficits in the entrainment of neural oscillations to external stimuli, especially at low frequencies. The current experiment investigated rhythmic processing in adults with dyslexia and matched controls. Regular and irregular rhythms were presented to participants while electroencephalography was recorded. Regular rhythms contained the beat at 2 Hz, while acoustic energy was maximal at 4 Hz and 8 Hz. These stimuli allowed us to investigate whether the brain responds non-linearly to the beat level of a rhythmic stimulus, and whether beat-based processing differs between dyslexic and control participants. Both groups showed enhanced stimulus-brain coherence for regular compared to irregular rhythms at the frequencies of interest, with an overrepresentation of the beat level in the brain compared to the acoustic signal. In addition, we found evidence that controls extracted subtle temporal regularities from irregular stimuli, whereas dyslexics did not. Findings are discussed in relation to dynamic attending theory and rhythmic processing deficits in dyslexia.
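A minimal sketch of a stimulus-brain coherence measure of the kind described above, using synthetic stand-ins: the stimulus envelope carries energy at 4 Hz while the simulated EEG responds at the 2-Hz beat. The sampling rate, durations, and segment length are assumptions, and the sketch illustrates only the computation, not the study's results.

```python
import numpy as np
from scipy.signal import coherence

fs = 128.0
t = np.arange(0, 120, 1 / fs)
stim_env = 0.5 + 0.5 * np.cos(2 * np.pi * 4 * t)           # acoustic energy at 4 Hz
eeg = np.cos(2 * np.pi * 2 * t) + np.random.randn(t.size)  # planted 2-Hz "beat" response

f, cxy = coherence(stim_env, eeg, fs=fs, nperseg=int(fs * 10))
for target in (2.0, 4.0, 8.0):                             # frequencies of interest
    print(f"{target} Hz coherence: {cxy[np.argmin(np.abs(f - target))]:.3f}")
```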
... The tempo of a piece of music is rarely, if ever, ambiguous. After only a few notes or drum strokes we have a keen sense of whether the music is fast, moderate, or slow, and in that same brief span of time we have entrained to the music's beat (Large, Herrera, & Velasco, 2015; Large & Palmer, 2002; Phillips-Silver et al., 2011; Tierney & Kraus, 2015). Yet the cues for tempo are neither simple nor straightforward. ...
Article
In a study of tempo perception, London, Burger, Thompson, and Toiviainen (2016) presented participants with digitally "tempo-shifted" R&B songs (i.e., sped up or slowed down without otherwise altering their pitch or timbre). They found that while participants' relative tempo judgments of original versus altered versions were correct, they no longer corresponded to the beat rate of each stimulus. Here we report on three experiments that further probe the relation(s) between beat rate, tempo-shifting, beat salience, melodic structure, and perceived tempo. Experiment 1 is a replication of London et al. (2016) using the original stimuli. Experiment 2 replaces the Motown stimuli with disco music, which has higher beat salience. Experiment 3 uses looped drum patterns, eliminating pitch and other cues from the stimuli and maximizing beat salience. The effect of London et al. (2016) was replicated in Experiment 1, present to a lesser degree in Experiment 2, and absent in Experiment 3. Experiments 2 and 3 also found that participants were able to make tempo judgments in accordance with BPM rates for stimuli that were not tempo-shifted. The roles of beat salience, melodic structure, and memory for tempo are discussed, and the TAE as an example of perceptual sharpening is considered.
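Stimuli like these can be produced by time-stretching audio without altering pitch. A possible sketch using the librosa library follows; the file name and BPM values are hypothetical placeholders, and this is not necessarily how the stimuli in these experiments were prepared.

```python
import librosa
import soundfile as sf

# Hedged sketch: phase-vocoder time stretch, i.e., tempo shift without
# pitch change. File name and tempi are hypothetical placeholders.
y, sr = librosa.load("disco_excerpt.wav", sr=None)
orig_bpm, target_bpm = 120.0, 125.0              # a +5 BPM shift
y_shifted = librosa.effects.time_stretch(y, rate=target_bpm / orig_bpm)
sf.write("disco_excerpt_125bpm.wav", y_shifted, sr)
```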
... When listening to music we can often feel the urge to move along with the beat. It has been shown that during this time not only our body but also our brain rhythms synchronize with the music (Large and Kolen, 1994; Tierney and Kraus, 2014b). Brain rhythms arise due to the simultaneous firing of neural populations (Buzsaki, 2006), and the frequency of these oscillations is related to different perceptual, motor, and cognitive processes (Buzsáki and Draguhn, 2004; MacKay, 1997; Ward, 2003). ...
Article
Entrainment to periodic acoustic stimuli has been found to relate to both the auditory and motor cortices, and it could be influenced by the maturity of these brain regions. However, existing research on this topic provides data about different oscillatory brain activities in different age groups with different musical backgrounds. In order to obtain a more coherent picture and examine early manifestations of entrainment, we assessed brain oscillations at multiple time scales (beta: 15–25 Hz, gamma: 28–48 Hz) and in steady-state evoked potentials (SS-EPs for short) in 6–7-year-old children with no musical background, right at the start of primary school before they learned to read. Our goal was to exclude the effects of music training and reading, since previous studies have shown that sensorimotor entrainment (movement synchronization to the beat) is related to musical and reading abilities. We found evidence for endogenous anticipatory processing in the gamma band related to meter perception, and stimulus-related frequency-specific responses. However, we did not find evidence for an interaction between auditory and motor networks, which suggests that endogenous mechanisms related to auditory processing may mature earlier than those that underlie motor actions, such as sensorimotor synchronization.
... Several developmental studies have been conducted investigating the effect of learning how to sing or to play a musical instrument over a period of time. Using expert and intervention designs, results have shown benefits for different types of cognitive processes such as verbal processing (Moreno et al., 2009; Seither-Preisler et al., 2014), intelligence (Schellenberg, 2004), reading (Moreno et al., 2011), inhibitory processing (Moreno et al., 2011), auditory processing (Moreno and Farzan, 2015; Tierney and Kraus, 2015), and general brain development (Hyde et al., 2009). However, evidence to date demonstrating benefits of musical training in older adults has often been correlational, not causal. ...
Article
Full-text available
Cognitive decline is an unavoidable aspect of aging that impacts important behavioral and cognitive skills. Training programs can improve cognition, yet precise characterization of the psychological and neural underpinnings supporting different training programs is lacking. Here, we assessed the effect and maintenance (3-month follow-up) of 3-month music and visual art training programs on neuroelectric brain activity in older adults using a partially randomized intervention design. During the pre-, post-, and follow-up test sessions, participants completed a brief neuropsychological assessment. High-density EEG was measured while participants were presented with auditory oddball paradigms (piano tones, vowels) and during a visual GoNoGo task. Neither training program significantly impacted psychometric measures, compared to a non-active control group. However, participants enrolled in the music and visual art training programs showed enhancement of auditory evoked responses to piano tones that persisted for up to 3 months after training ended, suggesting robust and long-lasting neuroplastic effects. Both music and visual art training also modulated visual processing during the GoNoGo task, although these training effects were relatively short-lived and disappeared by the 3-month follow-up. Notably, participants enrolled in the visual art training showed greater changes in visual evoked response (i.e., N1 wave) amplitude distribution than those from the music or control group. Conversely, those enrolled in music showed greater response associated with inhibitory control over the right frontal scalp areas than those in the visual art group. Our findings reveal a causal relationship between art training (music and visual art) and neuroplastic changes in sensory systems, with some of the neuroplastic changes being specific to the training regimen.
... For instance, enhanced musical rhythm perception has been found for those who have mastered an L2 (Roncaglia-Denissen, Roor, Chen, & Sadakata, 2016); this is particularly true when the L2 is rhythmically different relative to the L1, an effect that cannot be explained simply by exposure to more complex musical rhythms (as in Turkish L1 speakers; Roncaglia-Denissen et al., 2016). This relationship between musical rhythm and speech perception could rely on shared cognitive functions (such as working memory), as well as on the ability to entrain to multiplexed temporal scales, from the millisecond to the second level (Tierney & Kraus, 2015; Doelling & Poeppel, 2015; Schön & Tillmann, 2015). Future studies could implement a longitudinal design to demonstrate a causal effect of specific features of musical training (here, rhythm) on other features of L2 acquisition (here, accent placement). ...
Article
Full-text available
While many studies have demonstrated the relationship between musical rhythm and speech prosody, this has rarely been addressed in the context of second language (L2) acquisition. Here, we investigated whether musical rhythmic skills and the production of L2 speech prosody are predictive of one another. We tested both the musical and the linguistic rhythmic competences of 23 native French speakers of English as an L2. Participants completed music and language tests of perception and production. In the prosody production test, sentences containing trisyllabic words with prominence on either the first or the second syllable were heard and had to be reproduced. Participants were less accurate in reproducing penultimate accent placement. Moreover, accuracy in reproducing phonologically disfavored stress patterns was best predicted by rhythm production abilities. Our results show, for the first time, that better reproduction of musical rhythmic sequences is predictive of a more successful realization of unfamiliar L2 prosody, specifically in terms of stress-accent placement.
... Comparatively little, therefore, is known about the neural mechanisms underlying more general (non-verbal) selective auditory attention. Neural entrainment is not limited to speech stimuli, and has been demonstrated for simple abstract sounds (32, 33), non-verbal rhythms (34), and ecologically valid music (35-38) as well. Research on non-human primates in which non-verbal stimulus streams are presented at different time points has shown that switching attention between streams modulates neural entrainment (39-42). ...
Preprint
How does the brain follow a sound that is mixed with others in a noisy environment? A possible strategy is to allocate attention to task-relevant time intervals while suppressing irrelevant intervals - a strategy that could be implemented by aligning neural modulations with critical moments in time. Here we tested whether selective attention to non-verbal sound streams is linked to shifts in the timing of attentional modulations of EEG activity, and investigated whether this neural mechanism can be enhanced by short-term training and musical experience. Participants performed a memory task on a target auditory stream presented at 4 Hz while ignoring a distractor auditory stream also presented at 4 Hz, but with a 180-degree shift in phase. The two attention conditions were linked to a roughly 180-degree shift in phase in the EEG signal at 4 Hz. Moreover, there was a strong relationship between performance on the 1-back task and the timing of the EEG modulation with respect to the attended band. EEG modulation timing was also enhanced after several days of training on the selective attention task and enhanced in experienced musicians. These results support the hypothesis that modulation of neural timing facilitates attention to particular moments in time and indicate that phase timing is a robust and reliable marker of individual differences in auditory attention. Moreover, these results suggest that nonverbal selective attention can be enhanced in the short term by only a few hours of practice and in the long term by years of musical training.
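To make the phase comparison concrete, here is a hedged sketch of estimating the 4-Hz phase of an EEG recording and contrasting it between two attention conditions. The data are random stand-ins and every parameter is an assumption; only the anti-phase comparison itself follows the text.

```python
import numpy as np

fs = 256.0
n = int(fs * 60)                          # one minute of data (assumed)
freqs = np.fft.rfftfreq(n, 1 / fs)
bin_4hz = np.argmin(np.abs(freqs - 4.0))  # FFT bin nearest 4 Hz

def phase_4hz(eeg):
    """Phase angle of the 4-Hz Fourier component."""
    return np.angle(np.fft.rfft(eeg)[bin_4hz])

attend_a = np.random.randn(n)             # stand-in: attend stream A
attend_b = np.random.randn(n)             # stand-in: attend stream B
diff = np.angle(np.exp(1j * (phase_4hz(attend_a) - phase_4hz(attend_b))))
print(f"phase difference: {np.degrees(diff):.1f} deg "
      "(~180 expected for anti-phase attention, per the study)")
```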
Article
Full-text available
Musical rhythms elicit the perception of a beat (or pulse), which in turn elicits spontaneous motor synchronization (Repp & Su, 2013). Electroencephalography (EEG) research has shown that endogenous neural oscillations dynamically entrain to the beat frequencies of musical rhythms, providing a neurological marker for beat perception (Nozaradan, Peretz, Missal, & Mouraux, 2011). Rhythms, however, vary in complexity, which modulates the ability to synchronize motor movements. Although musical rhythms are usually assumed to come from auditory sources, recent research suggests that rhythms presented through vibro-tactile stimulation of the spine support motor synchronization comparably to auditory presentation for simpler rhythms, although this advantage diminishes as complexity increases (Ammirante, Patel, & Russo, 2016). The current research proposes to explore the neural correlates of vibro-tactile beat perception, with the aim of providing further evidence for rhythm perception in the vibro-tactile modality. Participants will be passively exposed to simple and complex rhythms from auditory, vibro-tactile, and multi-modal sources. Synchronization ability as well as EEG recordings will be obtained in order to provide behavioural and neurological indexes of beat perception. Results from this research will provide evidence for non-auditory, vibro-tactile capabilities of music perception.
... In addition to helping define the scope of meter-strength effects, this approach taps into the association vs. dissociation between beat and meter. Although beat and meter are often viewed as indissociable periodic phenomena that differ only in time scale (Nozaradan, Peretz, & Mouraux, 2012; Tierney & Kraus, 2014), evolutionary (Fitch, 2013) and neuroanatomical evidence (Thaut, Trimarchi, & Parsons, 2014) points to dissociation: meter may not be merely a "supra-beat," or an extension of beat. Here is how we tested the dissociation hypothesis: It is known that modality affects beat processing, in that beat-based auditory temporal patterns are easier to learn than visual ones (Pasinski, McAuley, & Snyder, 2016; Repp & Penel, 2002; Silva & Castro, 2016). ...
Article
Both dance and music performers must learn timing patterns (temporal learning, or "when") along with series of different movements (ordinal learning, or "what"). It has been suggested that the organization of temporal events into regular beat cycles (meter strength) may enhance both temporal and ordinal learning, but empirical evidence is mixed and incomplete. In the present study, we examined meter-strength effects on the concurrent temporal and ordinal learning of sequences. Meter strength enhanced ordinal learning ("what") when the concurrent temporal learning was incidental, but it had no effects on temporal learning itself ("when"). Our findings provide guidelines for dance and music teaching, as well as rhythm-based neurological rehabilitation.
... However, the primary feature of rhythm that is typically the focus of entrainment is the beat, or fundamental period of the repeated stimuli. Rhythm processing and entrainment are tightly intertwined; neural resonance theory hypothesizes that the perception of a beat is in itself a result of entrainment (Tierney & Kraus, 2015). Psychophysical studies have shown that motor entrainment to auditory rhythm is almost instantaneous, without necessary learning periods (Stephan et al., 2002). ...
... This approach proposes a bank of non-linear neural oscillators, which move in synchrony with the beat of an external signal (Jones & Boltz, 1989; Large & Jones, 1999; Large & Snyder, 2009). Neurophysiological evidence supports neural beat alignment with simple stimuli (Fujioka, Zendel, & Ross, 2010; Iversen, Repp, & Patel, 2009) and with ecologically valid music (a pop song; Tierney & Kraus, 2014b), and offers an explanation for enhanced phonological abilities in musicians, whose training emphasizes precise timing skills (Tierney & Kraus, 2014a). ...
Article
Will listening to music on the radio change the way you or your children speak? Comparisons are often drawn between the domains of music and language. Temporal processing is one general mechanism that influences both domains; however, a cross-domain influence of rate priming has not yet been established between music and speech. The current research examines whether the timing in one modality (music) affects production timing in a different modality (language) for both adults (Experiment 1) and preschool children (Experiment 2). Participants listened to short unfamiliar musical melodies presented at either a fast or a slow rate, and then described pictures aloud. Results demonstrate that both adults' and children's language production was influenced by the timing of the musical domain; faster musical primes led to faster speech production. These findings support domain-general temporal processing, since musical timing affects linguistic timing even when the music has no linguistic component.
... Such internal musical representations have long been proposed (e.g., Longuet-Higgins & Lee, 1982; Palmer & Krumhansl, 1990) and may be at work in the present results. Further, our finding of a more pronounced effect for shorter loops implicates temporal windows of early auditory system processing (for review, see Haegens and Zion Golumbic, 2017), which are well known for speech (Ding et al., 2017; Teng, Tian, & Poeppel, 2016), music (Doelling & Poeppel, 2015), and, specifically, musical rhythm (Large & Snyder, 2009; Nozaradan, 2014; Tierney & Kraus, 2015). ...
Article
While many techniques are known to music creators, the technique of repetition is one of the most commonly deployed. The mechanism by which repetition is effective as a music-making tool, however, is unknown. Building on the speech-to-song illusion (Deutsch, Henthorn, & Lapidis in Journal of the Acoustical Society of America, 129(4), 2245–2252, 2011), we explore a phenomenon in which the perception of musical attributes is elicited from repeated, or "looped," auditory material usually perceived as nonmusical, such as speech and environmental sounds. We assessed whether this effect holds true for speech stimuli of different lengths; nonspeech sounds (water dripping); and speech signals decomposed into their rhythmic and spectral components. Participants listened to looped stimuli (from 700 to 4,000 ms) and provided continuous as well as discrete perceptual ratings. We show that the regularizing effect of repetition generalizes to nonspeech auditory material and is strongest for shorter clip lengths in the speech and environmental cases. We also find that deconstructed pitch and rhythmic speech components independently elicit a regularizing effect, though the effect across segment duration differs from that for intact speech and environmental sounds. Taken together, these experiments suggest repetition may invoke active internal mechanisms that bias perception toward musical structure.
Article
Full-text available
This work is the second part of the article "From Continuity to Rhythm" (Acta Semiotica, II, 3, 2022), where I set out to define rhythm as a semiotic concept. The cornerstones for such a definition are Victoria Santa Cruz's concept of Rhythm as an active and continuous principle of integration, Landowski's understanding of continuity, Leibniz's notion of Harmony, the physical concept of entrainment, and the formal definition of feedback from engineering as a framework for understanding manipulation, adjustment, and entrainment. The main contributions are: 1) analysing the regimes of adjustment and manipulation in terms of feedback; 2) establishing a link between manipulation and adjustment, showing that the former is a simplification of the latter; 3) providing a semiotic definition of entrainment that considers five different dimensions, expressing manipulation and adjustment as specific modes of entrainment; 4) providing a semiotic definition of Rhythm as a dynamic unification process departing from Leibniz's notion of Harmony; 5) questioning Leibniz's Harmony in favor of an active view of Rhythm that not only produces order or common properties but actively strives to maintain them by virtue of feedback.
Preprint
Speech and music signals show rhythmicity in their temporal structure, with slower rhythmic rates in music than in speech. Speech processing has been related to brain rhythms in the auditory and motor cortex at around 4.5 Hz, while music processing has been associated with motor cortex activity at around 2 Hz, reflecting the temporal structures of speech and music. In addition, slow motor cortex brain rhythms have been suggested to be central for timing in both domains. It thus remains unclear whether domain-general or frequency-specific mechanisms drive speech and music processing. Additionally, for speech processing, auditory-motor cortex coupling and perception-production synchronization at 4.5 Hz have been related to enhanced auditory perception in various tasks. However, it is unknown whether this effect generalizes to synchronization and perception in music at distinct optimal rates. Using a behavioral protocol, we investigate whether (1) perception-production synchronization shows distinct optimal rates for speech and music, and (2) optimal rates in perception are predicted by synchronization strength at different time scales. A perception task involving speech and music stimuli and a synchronization task using tapping and whispering were conducted at slow (~2 Hz) and fast (~4.5 Hz) rates. Results revealed that synchronization was generally better at slow rates. Importantly, for slow but not for fast rates, tapping showed superior performance compared to whispering, suggesting domain-specific rate preferences. Accordingly, synchronization performance was highly correlated across domains only at fast but not at slow rates. Altogether, perception of speech and music was optimal at different timescales and was predicted by auditory-motor synchronization strength. Our data suggest different optimal time scales for music and speech processing, with partially overlapping mechanisms.
Article
Full-text available
A consistent relationship has been found between rhythmic processing and reading skills. Impairment of the ability to entrain movements to an auditory rhythm in clinical populations with language-related deficits, such as children with developmental dyslexia, has been found in both behavioral and neural studies. In this study, we explored the relationship between rhythmic entrainment, behavioral synchronization, reading fluency, and reading comprehension in neurotypical English- and Mandarin-speaking adults. First, we examined entrainment stability by asking participants to coordinate taps with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Next, we assessed behavioral synchronization by asking participants to coordinate taps with the syllables they produced while reading sentences as naturally as possible (tap to syllable task). Finally, we measured reading fluency and reading comprehension for native English and native Mandarin speakers. Stability of entrainment correlated strongly with tap to syllable task performance and with reading fluency, and both findings generalized across English and Mandarin speakers.
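One standard index of entrainment stability in such tapping tasks is the length of the mean resultant vector of tap phases relative to the pacing onsets. The sketch below assumes an isochronous metronome and synthetic tap times; it illustrates one plausible measure, not necessarily the one used in this study.

```python
import numpy as np

ioi = 0.5                                  # metronome inter-onset interval, s
onsets = np.arange(0, 30, ioi)
taps = onsets + np.random.normal(0.02, 0.03, onsets.size)  # synthetic taps

# Circular mean resultant vector length: 1 = perfectly locked, 0 = random.
rel_phase = 2 * np.pi * (taps - onsets) / ioi
stability = np.abs(np.mean(np.exp(1j * rel_phase)))
print(f"entrainment stability: {stability:.3f}")
```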
Article
Full-text available
Rhythm is not only meaningful in music; temporality and regularity play a role in almost every area of human life. Our first encounter with the structure inherent in acoustic patterns can be dated to fetal life, thanks to the mother's heartbeats. Movement and play involving rhythmic elements are particularly important for a child's development, and they are also a source of joy. Playful rhythmic exercises in school music lessons (and the musical, psychomotor, and general cognitive development that accompanies them) can therefore not only support skills important for the development of the personality but can also be a joyful musical activity for pupils. At the same time, in our experience, the practice and methodology of Hungarian music education places greater emphasis on vocal musical activities than on rhythm games; it is rather singing-centered. School music lessons could also offer an opportunity for pupils to come to enjoy music and musical activities, yet previous studies indicate that music lessons are not among the most popular school subjects. The main aim of our research was therefore to develop methods for rhythmic training that are enjoyable, target the playful development of rhythmic skills, and at the same time can be applied simply and effectively in a school environment. The rhythmic development program presented in this study can easily be integrated into the curriculum; it is organized into development periods, topics, and difficulty levels, and it contains the information, tasks, and methodological suggestions needed for practical application. We report in detail the results of an effectiveness study of the program with first-grade pupils, which showed that varied rhythm games have a positive effect both on the development of rhythmic skills and on attitudes toward music as a school subject.
Thesis
Some temporal abilities are innate, such as the capacity to produce rhythmic movements or to learn a duration implicitly. However, awareness of time, or the explicit judgment of durations, is acquired slowly over the course of development. The question therefore arises: how do we acquire these abilities of temporal judgment? One hypothesis that can be drawn from work with young children is that we learn what time is through the repeated experience of its passage during the duration of our actions. Recent imaging studies, which have consistently shown an involvement of motor areas during duration perception in adults, support the hypothesis that action plays an important role in the construction of temporal judgments. For this reason, in this thesis we conducted a series of studies exploring more precisely the role of action in temporal activities and the evolution of this role over the course of development. We examined the respective roles of the development of motor skills and of cognitive functions in rhythmic activities, then tested the influence of sensorimotor learning of rhythms, compared with visual learning, on different temporal tasks, both motor and perceptual. Finally, we examined the influence of action tendencies, in the face of a threat, on the processing of temporal intervals. Our work confirmed the strong influence of motor abilities on rhythmic performance, particularly in children. Moreover, we showed that action benefits children in motor as well as perceptual temporal tasks, and all the more so the younger they are. Action also benefits adults, but only when the task is difficult. Finally, the need to act quickly modifies the processing of temporal intervals. It thus appears that action and the motor system play a special role in temporal processing and its development, notably by facilitating the learning of durations. Time is therefore first "acted" before being "represented" and conceptualized.
Article
Full-text available
When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to audiobook speech of 29 participants (22 females). We examined whether these representations contribute unique information over and beyond acoustic neural tracking and each other. Indeed, not all of these linguistic representations were significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were all significantly tracked over and beyond acoustic properties. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations are tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize the processing of the linguistic content of speech. SIGNIFICANCE STATEMENT: For clinical applications, it would be desirable to develop a neural marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow for behavior-free evaluation of speech understanding; this would open doors toward better quantification of speech understanding in populations from whom obtaining behavioral measures may be difficult, such as young children or people with cognitive impairments, and would allow better targeted interventions and better fitting of hearing devices.
Article
Full-text available
Interpersonal synchrony refers to the temporal coordination of actions between individuals and is a common feature of social behaviors, from team sport to ensemble music performance. Interpersonal synchrony of many rhythmic (periodic) behaviors displays dynamics of coupled biological oscillators. The current study addresses oscillatory dynamics on the levels of brain and behavior between music duet partners performing at spontaneous (uncued) rates. Wireless EEG was measured from N = 20 pairs of pianists as they performed a melody first in Solo performance (at their spontaneous rate of performance), and then in Duet performances at each partner’s spontaneous rate. Influences of partners’ spontaneous rates on interpersonal synchrony were assessed by correlating differences in partners’ spontaneous rates of Solo performance with Duet tone onset asynchronies. Coupling between partners’ neural oscillations was assessed by correlating amplitude envelope fluctuations of cortical oscillations at the Duet performance frequency between observed partners and between surrogate (re-paired) partners, who performed the same melody but at different times. Duet synchronization was influenced by partners’ spontaneous rates in Solo performance. The size and direction of the difference in partners’ spontaneous rates were mirrored in the size and direction of the Duet asynchronies. Moreover, observed Duet partners showed greater inter-brain correlations of oscillatory amplitude fluctuations than did surrogate partners, suggesting that performing in synchrony with a musical partner is reflected in coupled cortical dynamics at the performance frequency. The current study provides evidence that dynamics of oscillator coupling are reflected in both behavioral and neural measures of temporal coordination during musical joint action.
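The inter-brain coupling measure described here can be sketched as a correlation between band-limited amplitude envelopes. The fragment below assumes a performance frequency, a filter design, and synthetic EEG stand-ins; it is one plausible implementation, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0
perf_hz = 2.0                                   # assumed duet performance rate
b, a = butter(4, [perf_hz - 0.5, perf_hz + 0.5], btype="band", fs=fs)

def envelope(eeg):
    """Amplitude envelope of the band around the performance frequency."""
    return np.abs(hilbert(filtfilt(b, a, eeg)))

eeg1 = np.random.randn(int(fs * 60))            # stand-in: pianist 1
eeg2 = np.random.randn(int(fs * 60))            # stand-in: pianist 2
r = np.corrcoef(envelope(eeg1), envelope(eeg2))[0, 1]
print(f"inter-brain envelope correlation: r = {r:.3f}")
```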
Article
The present research explored the influence of isochronous auditory rhythms on the timing of movement-related prediction in two experiments. In both experiments, participants observed a moving disc that was visible for a predetermined period before disappearing behind a small, medium, or large occluded area for the remainder of its movement. In Experiment 1, the disc was visible for 1 s. During this period, participants were exposed to either a fast or slow auditory rhythm, or they heard nothing. They were instructed to press a key to indicate when they believed the moving disc had reached a specified location on the other side of the occluded area. The procedure measured the (signed) error in participants’ estimate of the time it would take for a moving object to contact a stationary one. The principal results of Experiment 1 were main effects of the rate of the auditory rhythm and of the size of the occlusion on participants’ judgments. In Experiment 2, the period of visibility was varied with size of the occlusion area to keep the total movement time constant for all three levels of occlusion. The results replicated the main effect of rhythm found in Experiment 1 and showed a small, significant interaction, but indicated no main effect of occlusion size. Overall, the results indicate that exposure to fast isochronous auditory rhythms during an interval of inferred motion can influence the imagined rate of such motion and suggest a possible role of an internal rhythmicity in the maintenance of temporally accurate dynamic mental representations.
Chapter
Synopsis: Auditory entrainment is a fundamental neural mechanism subserving auditory perception. It has been deployed to study a wide range of problems in temporal processing. We examine the concept of entrainment, compare it to similar phenomena, and then take a systemic view, examining the functional roles of auditory entrainment at the cortical level. We aim to chart future directions of research, hoping to stimulate a transition from using auditory entrainment as a toolkit to understanding its computations and functions.
Article
Full-text available
How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, since this prior research used relatively easy tasks and focused on analysis of main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and enhancements of the effects of attention on neural phase angle. Our results suggest that although there exist large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
Article
Meter serves as a robust temporal referent for the creation and perception of musical rhythm. In music from Africa and the diaspora, a parallel referent is often present in the form of repetitive rhythmic patterns known as timelines. This paper examines how a well-known timeline (the standard pattern) serves as a grounding framework for quinto (lead conga drum) rhythms heard in different drumming performances of Afro-Cuban rumba columbia. Focusing on the layout of alignment points between the constituent elements of the various temporal layers—rhythm, timeline, and possible meters—indicates that the quinto players may be orienting their playing according to the timeline's onsets.
Article
Selective attention plays a key role in determining what aspects of our environment are encoded into long-term memory. Auditory rhythms with a regular beat provide temporal expectations that entrain attention and facilitate perception of visual stimuli aligned with the beat. The current study investigated whether entrainment to background auditory rhythms also facilitates higher-level cognitive functions such as episodic memory. In a series of experiments, we manipulated temporal attention through the use of rhythmic, instrumental music. In Experiment 1A and 1B, we found that background musical rhythm influenced the encoding of visual targets into memory, evident in enhanced subsequent memory for targets that appeared in-synchrony compared to out-of-synchrony with the background beat. Response times at encoding did not differ for in-synchrony compared to out-of-synchrony stimuli, suggesting that the rhythmic modulation of memory does not simply reflect rhythmic effects on perception and action. Experiment 2 investigated whether rhythmic effects on response times emerge when task procedures more closely match prior studies that have demonstrated significant auditory entrainment effects. Responses were faster for in-synchrony compared to out-of-synchrony stimuli when participants performed a more perceptually-oriented task that did not contain intervening recognition memory tests, suggesting that rhythmic effects on perception and action depend on the nature of the task demands. Together, these results support the hypothesis that rhythmic temporal regularities provided by background music can entrain attention and influence the encoding of visual stimuli into memory.
Article
“Timelessness” is an area of intense interest for many composers and authors interested in 20th- and 21st-century music, but it is not always clear exactly what the term denotes. In particular, the distinction between the induction of timelessness (the listener’s subjective experience of time is altered or suspended by music) and the perception of timelessness (the listener recognizes that the music expresses altered or suspended time) has yet to be clarified. This paper argues that, while experiences of timelessness may be induced by a wide variety of musics and are not necessarily contingent on specific musical qualities, the perception of musical timelessness involves relationships between music’s temporal organization and the temporal structure of auditory perception. Of particular interest are segmentation, sequence, pulse, meter, and repetition. Music whose temporal organization optimizes human information processing and embodiment expresses “human time,” and music whose temporal organization subverts or exceeds human information processing and embodiment points outside of human time, to timelessness. This hypothesis is illustrated with examples from the 20th-century repertoire by Truax, Ligeti, Crumb, Reich, Tenney, Messiaen, and Grisey, music that has been associated with timelessness.
Article
Full-text available
A number of phenomena related to the perception of isochronous tone sequences peak at a certain rate (or tempo) and taper off at both slower and faster rates. In the present paper we start from the hypothesis that the peaking finds its origin in the presence of a damped resonating oscillator in the perceptual-motor system. We assume that for pulse perception only the 'effective' resonance curve matters, i.e., the enhancement of the amplitude of the oscillator beyond the critical damping. On the basis of the effective resonance curve, analyses have been made of data of Vos (1973) on subjective rhythmization and of data on tapping along isochronous tone sequences (Parncutt, 1994) and polyrhythmic sequences (Handel & Oshinsky, 1981). The results show that these data can be very well approximated with the proposed model. The best results are obtained with a resonance period of 500-550 ms and a width at half height of about 400-800 ms. A comparison is made with a number of other tempo related phenomena. In the second part a preliminary effort is made to determine the distribution of perceived tempi of musical pieces heard on the radio and in recordings of several styles, by having a number of listeners tapping along these pieces. The resonance curve appears to be a good tool to characterize these distributions.
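The resonance idea can be illustrated with the steady-state amplitude response of a damped, driven oscillator. In the sketch below, the ~500-ms resonance period comes from the abstract, while the damping ratio is an assumed value chosen only to give a plausible curve width.

```python
import numpy as np

f0 = 2.0                    # resonance frequency, Hz (period = 500 ms)
zeta = 0.3                  # damping ratio (assumed)
f = np.linspace(0.5, 8.0, 400)                 # driving rates, Hz
w, w0 = 2 * np.pi * f, 2 * np.pi * f0

# Steady-state amplitude of a damped, sinusoidally driven oscillator.
amp = 1.0 / np.sqrt((w0**2 - w**2) ** 2 + (2 * zeta * w0 * w) ** 2)
print(f"peak response near {f[np.argmax(amp)]:.2f} Hz "
      f"({1000 / f[np.argmax(amp)]:.0f} ms period)")
```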
Article
Full-text available
The auditory steady state potentials may be an important technique in objective audiometry. The effects of stimulus rate, intensity, and tonal frequency on these potentials were investigated using both signal averaging and on-line Fourier analysis. Stimulus presentation rates of 40 to 45/sec result in a 40 Hz sinusoidal response which is about twice the amplitude of the 10 and 60/sec responses. No significant effects of subject age or sex were seen. The 40/sec response shows a linear decrease in amplitude and a linear increase in latency when stimulus intensity is decreased from 90 to 20 dB normal hearing level. This response is recordable to within a few decibels of behavioral threshold. Stimuli of different tonal frequency give similar amplitude/rate functions, with absolute amplitude decreasing with increasing tonal frequency. Signal averaging and Fourier analysis provide nearly identical amplitude/rate, amplitude/intensity, and latency/intensity functions. Both methods of analysis may be used, therefore, to record the 40 Hz steady state potential. Fourier analysis, however, may be the faster and less expensive method. Furthermore, techniques ("zoom") are available with Fourier analysis to study the effects of varying stimulus parameters on-line with the Fourier analysis procedure.
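The Fourier-analysis approach described here reduces to reading out amplitude and phase at the stimulation rate. A minimal sketch, with a synthetic averaged response standing in for real data and all parameters assumed:

```python
import numpy as np

fs = 1000.0
rate_hz = 40.0                                  # stimulation rate
t = np.arange(0, 1.0, 1 / fs)
# Synthetic averaged epoch: a 40-Hz steady-state response plus noise.
avg = 0.2 * np.sin(2 * np.pi * rate_hz * t) + 0.05 * np.random.randn(t.size)

spectrum = np.fft.rfft(avg) / len(avg)
freqs = np.fft.rfftfreq(len(avg), 1 / fs)
k = np.argmin(np.abs(freqs - rate_hz))
print(f"40-Hz amplitude: {2 * np.abs(spectrum[k]):.3f} (arb. units), "
      f"phase: {np.degrees(np.angle(spectrum[k])):.1f} deg")
```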
Article
Full-text available
Beat and meter induction are considered important structuring mechanisms underlying the perception of rhythm. Meter comprises two or more levels of hierarchically ordered regular beats with different periodicities. When listening to music, adult listeners weight events within a measure in a hierarchical manner. We tested if listeners without advanced music training form such hierarchical representations for a rhythmical sound sequence under different attention conditions (Attend, Unattend, and Passive). Participants detected occasional weakly and strongly syncopated rhythmic patterns within the context of a strictly metrical rhythmical sound sequence. Detection performance was better and faster when syncopation occurred in a metrically strong as compared to a metrically weaker position. Compatible electrophysiological differences (earlier and higher-amplitude MMN responses) were obtained when participants did not attend the rhythmical sound sequences. These data indicate that hierarchical representations for rhythmical sound sequences are formed preattentively in the human auditory system.
Article
Full-text available
The tracking of rhythmic structure is a vital component of speech and music perception. It is known that sequences of identical sounds can give rise to the percept of alternating strong and weak sounds, and that this percept is linked to enhanced cortical and oscillatory responses. The neural correlates of the perception of rhythm elicited by ecologically valid, complex stimuli, however, remain unexplored. Here we report the effects of a stimulus' alignment with the beat on the brain's processing of sound. Human subjects listened to short popular music pieces while simultaneously hearing a target sound. Cortical and brainstem electrophysiological onset responses to the sound were enhanced when it was presented on the beat of the music, as opposed to shifted away from it. Moreover, the size of the effect of alignment with the beat on the cortical response correlated strongly with the ability to tap to a beat, suggesting that the ability to synchronize to the beat of simple isochronous stimuli and the ability to track the beat of complex, ecologically valid stimuli may rely on overlapping neural resources. These results suggest that the perception of musical rhythm may have robust effects on processing throughout the auditory system.
Article
Full-text available
Here we present two experiments investigating the implicit orienting of attention over time through entrainment to an auditory rhythmic stimulus. In the first experiment, participants carried out detection and discrimination tasks with auditory and visual targets while listening to an isochronous auditory sequence, which acted as the entraining stimulus. For the second experiment, we used musical extracts as the entraining stimulus and tested the resulting strength of entrainment with a visual discrimination task. Both experiments used reaction times as the dependent variable. By manipulating the appearance of targets across four selected metrical positions of the auditory entraining stimulus, we were able to observe how entraining to a rhythm modulates behavioural responses. That our results were independent of modality gives new insight into cross-modal interactions between the auditory and visual modalities in the context of dynamic attending to auditory temporal structure.
Article
Full-text available
Fundamental to the experience of music, beat and meter perception refers to the perception of periodicities while listening to music, occurring within the frequency range of musical tempo. Here, we explored the spontaneous building of beat and meter, hypothesized to emerge from the selective entrainment of neuronal populations at beat and meter frequencies. The electroencephalogram (EEG) was recorded while human participants listened to rhythms consisting of short sounds alternating with silences, designed to induce a spontaneous perception of beat and meter. We found that the rhythmic stimuli elicited multiple steady-state evoked potentials (SS-EPs), observed in the EEG spectrum at frequencies corresponding to the rhythmic pattern envelope. Most importantly, the amplitudes of the SS-EPs obtained at beat and meter frequencies were selectively enhanced even though the acoustic energy was not necessarily predominant at these frequencies. Furthermore, accelerating the tempo of the rhythmic stimuli so as to move away from the range of frequencies at which beats are usually perceived impaired the selective enhancement of SS-EPs at these frequencies. The observation that beat- and meter-related SS-EPs are selectively enhanced at frequencies compatible with beat and meter perception indicates that these responses do not merely reflect the physical structure of the sound envelope but, instead, reflect the spontaneous emergence of an internal representation of beat, possibly through a mechanism of selective neuronal entrainment within a resonance frequency range. Taken together, these results suggest that musical rhythms constitute a unique context for gaining insight into general mechanisms of entrainment, from the neuronal level to the individual level.
Article
Full-text available
Oscillatory activity in sensory cortices reflects changes in local excitation-inhibition balance, and recent work suggests that phase signatures of ongoing oscillations predict the perceptual detection of subsequent stimuli. Low-frequency oscillations are also entrained by dynamic natural scenes, suggesting that the chance of detecting a brief target depends on its timing relative to the entrained rhythm. We tested this hypothesis in humans by implementing a cocktail-party-like scenario requiring subjects to detect a target embedded in a cacophony of background sounds. Using EEG to measure auditory cortical oscillations, we found that the chance of target detection systematically depends on both the power and the phase of theta-band (2-6 Hz), but not alpha-band (8-12 Hz), oscillations before the target. Detection rates were higher and responses faster when oscillatory power was low, and both detection rate and response speed were modulated by phase. Intriguingly, the phase dependency was stronger for miss than for hit trials, suggesting that phase has an inhibiting but not an ensuring role in detection. Entrainment of theta-range oscillations prominently occurs during the processing of attended complex stimuli, such as vocalizations and speech. Our results demonstrate that this entrainment to attended sensory environments may have negative effects on the detection of individual tokens within the environment, and they support the notion that specific phase ranges of cortical oscillations act as gatekeepers for perception.
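One simple way to examine such a phase dependency is to bin trials by prestimulus theta phase and compare hit rates across bins. The sketch below plants a phase effect in synthetic data purely for illustration; only the phase-binning logic follows the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 400
phase = rng.uniform(-np.pi, np.pi, n_trials)   # prestimulus theta phase
p_hit = 0.5 + 0.15 * np.cos(phase)             # planted phase dependence
hits = rng.random(n_trials) < p_hit

bins = np.linspace(-np.pi, np.pi, 7)           # six phase bins
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (phase >= lo) & (phase < hi)
    print(f"[{np.degrees(lo):6.1f}, {np.degrees(hi):6.1f}) deg: "
          f"hit rate = {hits[sel].mean():.2f}")
```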
Article
Full-text available
The ability to perceive a musical beat (and move in synchrony with it) seems widespread, but we currently lack normative data on the distribution of this ability in musically untrained individuals. To aid in the survey of beat processing abilities in the general population, as well as to attempt to identify and differentiate impairments in beat processing, we have developed a psychophysical test called the Beat Alignment Test (BAT). The BAT is intended to complement existing tests of rhythm processing by directly examining beat perception in isolation from beat synchronization. The goals of the BAT are 1) to study the distribution of beat-based processing abilities in the normal population and 2) to provide a way to search for "rhythm deaf" individuals, who have trouble with beat processing in music though they are not tone deaf. The BAT is easily implemented and it is our hope that it is widely adopted. Data from a pilot study of 30 individuals is presented.
Article
Full-text available
Our perception of time is affected by the modality in which it is conveyed. Moreover, certain temporal phenomena appear to exist in only one modality. The perception of temporal regularity or structure (e.g., the 'beat') in rhythmic patterns is one such phenomenon: visual beat perception is rare. The modality-specificity for beat perception is puzzling, as the durations that comprise rhythmic patterns are much longer than the limits of visual temporal resolution. Moreover, the optimization that beat perception provides for memory of auditory sequences should be equally relevant to visual sequences. Why does beat perception appear to be modality specific? One possibility is that the nature of the visual stimulus plays a role. Previous studies have usually used brief stimuli (e.g., light flashes) to present visual rhythms. In the current study, a rotating line that appeared sequentially in different spatial orientations was used to present a visual rhythm. Discrimination accuracy for visual rhythms and auditory rhythms was compared for different types of rhythms. The rhythms either had a regular temporal structure that previously has been shown to induce beat perception in the auditory modality, or they had an irregular temporal structure without beat-inducing qualities. Overall, the visual rhythms were discriminated more poorly than the auditory rhythms. The beat-based structure, however, increased accuracy for visual as well as auditory rhythms. These results indicate that beat perception can occur in the visual modality and improve performance on a temporal discrimination task, when certain types of stimuli are used.
Article
Full-text available
Moving in synchrony with an auditory rhythm requires predictive action based on neurodynamic representation of temporal information. Although it is known that a regular auditory rhythm can facilitate rhythmic movement, the neural mechanisms underlying this phenomenon remain poorly understood. In this experiment using human magnetoencephalography, 12 young healthy adults listened passively to an isochronous auditory rhythm without producing rhythmic movement. We hypothesized that the dynamics of neuromagnetic beta-band oscillations (~20 Hz), which are known to reflect changes in the active status of sensorimotor functions, would show modulations in both power and phase coherence related to the rate of the auditory rhythm across both auditory and motor systems. Despite the absence of an intention to move, modulation of beta amplitude as well as changes in cortico-cortical coherence followed the tempo of sound stimulation in auditory cortices and motor-related areas including the sensorimotor cortex, inferior frontal gyrus, supplementary motor area, and the cerebellum. The time course of the beta decrease after stimulus onset was consistent regardless of the rate or regularity of the stimulus, but the time course of the following beta rebound depended on the stimulus rate only in the regular stimulus conditions, such that the beta amplitude reached its maximum just before the occurrence of the next sound. Our results suggest that the time course of beta modulation provides a mechanism for maintaining predictive timing, that beta oscillations reflect functional coordination between auditory and motor systems, and that coherence in beta oscillations dynamically configures the sensorimotor networks for auditory-motor coupling.
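The amplitude dynamics described here can be sketched by extracting the beta-band envelope and averaging it across stimulus-locked epochs. A minimal sketch, assuming Hilbert-envelope extraction (the band edges, epoch length, and stimulus rate below are illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_envelope_timecourse(sig, fs, onsets, win=0.5, band=(15.0, 25.0)):
    """Average beta-band amplitude envelope across stimulus-locked epochs.

    sig    : 1-D sensor or source time series
    onsets : sample indices of sound onsets
    win    : epoch length in seconds following each onset
    """
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, sig)))    # beta amplitude envelope
    n = int(win * fs)
    epochs = np.stack([env[i:i + n] for i in onsets if i + n <= len(env)])
    return epochs.mean(axis=0)                    # modulation time course

fs = 300
sig = np.random.randn(fs * 30)
onsets = np.arange(fs, fs * 25, int(0.61 * fs))   # hypothetical isochronous rhythm
print(beta_envelope_timecourse(sig, fs, onsets).shape)
```

With real data, the averaged envelope would show the post-onset beta decrease and the rate-dependent rebound the authors describe.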
Conference Paper
Full-text available
Pulse and meter are remarkable in part because these perceived periodicities can arise from rhythmic stimuli that are not periodic. This phenomenon is most striking in syncopated rhythms, found in many genres of music, including music of non-Western cultures. In general, syncopated rhythms may have energy at frequencies that do not correspond to perceived pulse or meter, and perceived metrical frequencies that are weak or absent in the objective rhythmic stimulus. In this paper, we consider syncopated rhythms that contain little or no energy at the pulse frequency. We used 16 rhythms (3 simple, 13 syncopated) to test a model of pulse/meter perception based on nonlinear resonance, comparing the nonlinear resonance model with a linear analysis. Both models displayed the ability to differentiate between duple and triple meters; however, only the nonlinear model exhibited resonance at the pulse frequency for the most challenging syncopated rhythms. This result suggests that nonlinear resonance may provide a viable approach to pulse detection in syncopated rhythms.
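The flavor of such a model can be conveyed with a single canonical oscillator of the kind used in neural resonance work, dz/dt = z(alpha + i*omega + beta*|z|^2) + x(t), driven by a rhythm. The Euler sketch below uses made-up parameter values and a toy pulse train whose paired onsets cancel exactly at 2 Hz, the case where a linear resonator's steady-state output at the pulse frequency vanishes but a nonlinear one can still respond; it is an illustration, not the authors' model.

```python
import numpy as np

def canonical_oscillator(stim, fs, f_osc, alpha=-1.0, beta=-10.0, coupling=1.0):
    """Euler-integrate dz/dt = z*(alpha + i*2*pi*f_osc + beta*|z|^2) + stim(t)."""
    dt = 1.0 / fs
    z = np.zeros(len(stim), dtype=complex)
    z[0] = 0.01                                   # small initial state
    for t in range(len(stim) - 1):
        dz = (z[t] * (alpha + 2j * np.pi * f_osc + beta * abs(z[t]) ** 2)
              + coupling * stim[t])
        z[t + 1] = z[t] + dt * dz
    return z

fs = 100
grid = np.zeros(fs * 8)
for k in range(8):                                # eight 1-s cycles
    for frac in (0.0, 0.25):                      # onsets at 0 and 250 ms of each cycle:
        grid[int((k + frac) * fs)] = 1.0          # these pairs cancel at exactly 2 Hz
z = canonical_oscillator(grid, fs, f_osc=2.0)
print(abs(z[-fs:]).mean())                        # sustained amplitude near the pulse rate
```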
Article
Full-text available
Feeling the beat and meter is fundamental to the experience of music. However, how these periodicities are represented in the brain remains largely unknown. Here, we test whether this function emerges from the entrainment of neurons resonating to the beat and meter. We recorded the electroencephalogram while participants listened to a musical beat and imagined a binary or a ternary meter on this beat (i.e., a march or a waltz). We found that the beat elicits a sustained periodic EEG response tuned to the beat frequency. Most importantly, we found that meter imagery elicits an additional frequency tuned to the corresponding metric interpretation of this beat. These results provide compelling evidence that neural entrainment to beat and meter can be captured directly in the electroencephalogram. More generally, our results suggest that music constitutes a unique context to explore entrainment phenomena in dynamic cognitive processing at the level of neural networks.
Article
Full-text available
Subjective accenting is a cognitive process in which identical auditory pulses at an isochronous rate turn into the percept of an accenting pattern. This process can be voluntarily controlled, making it a candidate for communication from human user to machine in a brain-computer interface (BCI) system. In this study we investigated whether subjective accenting is a feasible paradigm for BCI and how its time-structured nature can be exploited for optimal decoding from non-invasive EEG data. Ten subjects perceived and imagined different metric patterns (two-, three- and four-beat) superimposed on a steady metronome. With an offline classification paradigm, we classified imagined accented beats from non-accented beats at the single-trial (0.5 s) level with an average accuracy of 60.4% over all subjects. We show that decoding of imagined accents is also possible with a classifier trained on perception data. Cyclic patterns of accents and non-accents were successfully decoded with a sequence classification algorithm. Classification performances were compared by means of bit rate. Performance in the best scenario translates into an average bit rate of 4.4 bits/min over subjects, which makes subjective accenting a promising paradigm for an online auditory BCI.
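Bit rate in BCI work is conventionally computed with Wolpaw's information-transfer-rate formula, B = log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)) bits per trial, scaled by trials per minute. The sketch below applies it to the abstract's numbers; we are assuming a formula of this kind was used, and the exact scenario behind the 4.4 bits/min figure will differ.

```python
import math

def wolpaw_bitrate(accuracy, n_classes, trial_seconds):
    """Wolpaw information-transfer rate in bits per minute."""
    p, n = accuracy, n_classes
    bits_per_trial = (math.log2(n) + p * math.log2(p)
                      + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits_per_trial * (60.0 / trial_seconds)

# 60.4% binary accuracy on 0.5-s trials:
print(wolpaw_bitrate(0.604, 2, 0.5))   # ~3.8 bits/min under these assumptions
```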
Article
Full-text available
Perceiving musical rhythms can be considered a process of attentional chunking over time, driven by accent patterns. A rhythmic structure can also be generated internally, by placing a subjective accent pattern on an isochronous stimulus train. Here, we investigate the event-related potential (ERP) signature of actual and subjective accents, thus disentangling low-level perceptual processes from the cognitive aspects of rhythm processing. The results show differences between accented and unaccented events, but also show that different types of unaccented events can be distinguished, revealing additional structure within the rhythmic pattern. This structure is further investigated by decomposing the ERP into subcomponents, using principal component analysis. In this way, the processes that are common for perceiving a pattern and self-generating it are isolated, and can be visualized for the tasks separately. The results suggest that top-down processes have a substantial role in the cerebral mechanisms of rhythm processing, independent of an externally presented stimulus.
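The PCA decomposition can be sketched as an SVD of a conditions-by-time matrix of averaged waveforms, yielding temporal subcomponents and per-condition loadings. The layout and component count below are illustrative assumptions; the authors' actual decomposition (for instance, across electrodes or trials) may differ.

```python
import numpy as np

def erp_pca(erps, n_components=2):
    """Split ERPs into temporal subcomponents via PCA (a sketch).

    erps : array of shape (n_conditions, n_timepoints), one averaged
           ERP waveform per row
    """
    centered = erps - erps.mean(axis=0)
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]           # temporal subcomponents
    loadings = (u * s)[:, :n_components]     # contribution per condition
    return components, loadings

# Hypothetical: accented vs. two kinds of unaccented events, 500 samples each
erps = np.random.randn(3, 500)
comps, loads = erp_pca(erps)
print(comps.shape, loads.shape)              # (2, 500) (3, 2)
```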
Article
Full-text available
The frontal-striatal circuits, the cerebellum, and motor cortices play crucial roles in processing timing information on second to millisecond scales. However, little is known about the physiological mechanism underlying humans' preference to robustly encode a sequence of time intervals into a mental hierarchy of temporal units called meter. This is especially salient in music: temporal patterns are typically interpreted as integer multiples of a basic unit (i.e., the beat) and accommodated into a global context such as march or waltz. With magnetoencephalography and spatial-filtering source analysis, we demonstrated that the time courses of neural activities index a subjectively induced meter context. Auditory evoked responses from hippocampus, basal ganglia, and auditory and association cortices showed a significant contrast between march and waltz metric conditions during listening to identical click stimuli. Specifically, the right hippocampus was activated differentially at 80 ms to the march downbeat (the count one) and approximately 250 ms to the waltz downbeat. In contrast, basal ganglia showed a larger 80 ms peak for march downbeat than waltz. The metric contrast was also expressed in long-latency responses in the right temporal lobe. These findings suggest that anticipatory processes in the hippocampal memory system and temporal computation mechanism in the basal ganglia circuits facilitate endogenous activities in auditory and association cortices through feedback loops. The close interaction of auditory, motor, and limbic systems suggests a distributed network for metric organization in temporal processing and its relevance for musical behavior.
Article
Full-text available
Individual performers in ensembles must attend simultaneously to their own part and parts played by others. Thus, they allocate attentional resources skilfully and flexibly between different sound sources in order to (a) monitor their own part and other parts, and (b) group together elements from these parts to derive the whole ensemble texture. The theory of Attentional Resource Allocation in Musical Ensemble Performance (ARAMEP) presented here accounts for how attentional flexibility is influenced by various musical and extramusical factors. It is claimed that these factors act directly upon cognitive/motor mechanisms that regulate attentional resource allocation. Particular focus is given to the role of meter in modulating resources in a manner that is plastic and efficient, and hence conducive to optimal attentional flexibility. Specifically, metric frameworks enable the availability of resources to be varied systematically, so as to compensate for fluctuations in resource activity that arise due to variability in the concentration of events at different metric locations in the music.
Article
Full-text available
Our perceptions are shaped by both extrinsic stimuli and intrinsic interpretation. The perceptual experience of a simple rhythm, for example, depends upon its metrical interpretation (where one hears the beat). Such interpretation can be altered at will, providing a model to study the interaction of endogenous and exogenous influences in the cognitive organization of perception. Using magnetoencephalography (MEG), we measured brain responses evoked by a repeating, rhythmically ambiguous phrase (two tones followed by a rest). In separate trials listeners were instructed to impose different metrical organizations on the rhythm by mentally placing the downbeat on either the first or the second tone. Since the stimulus was invariant, differences in brain activity between the two conditions should relate to endogenous metrical interpretation. Metrical interpretation influenced early evoked neural responses to tones, specifically in the upper beta range (20-30 Hz). Beta response was stronger (by 64% on average) when a tone was imagined to be the beat, compared to when it was not. A second experiment established that the beta increase closely resembles that due to physical accents, and thus may represent the genesis of a subjective accent. The results demonstrate endogenous modulation of early auditory responses, and suggest a unique role for the beta band in linking endogenous and exogenous processing. Given the suggested role of beta in motor processing and long-range intracortical coordination, it is hypothesized that the motor system influences metrical interpretation of sound, even in the absence of overt movement.
Article
Full-text available
Author Summary: Attention is the cognitive process underlying our ability to focus on specific aspects of our environment while ignoring others. By its very definition, attention plays a key role in differentiating foreground (the object of attention) from unattended clutter, or background. We investigate the neural basis of this phenomenon by engaging listeners to attend to different components of a complex acoustic scene. We present a spectrally and dynamically rich, but highly controlled, stimulus while participants perform two complementary tasks: to attend either to a repeating target note in the midst of random interferers ("maskers"), or to the background maskers themselves. Simultaneously, the participants' neural responses are recorded using the technique of magnetoencephalography (MEG). We hold all physical parameters of the stimulus fixed across the two tasks while manipulating one free parameter: the attentional state of listeners. The experimental findings reveal that auditory attention strongly modulates the sustained neural representation of the target signals in the direction of boosting foreground perception, much like known effects of visual attention. This enhancement originates in auditory cortex, and occurs exclusively at the frequency of the target rhythm. The results show a strong interaction between the neural representation of the attended target with the behavioral task demands, the bottom-up saliency of the target, and its perceptual detectability over time.
Article
Full-text available
To shed light on how humans can learn to understand music, we need to discover the perceptual capabilities with which infants are born. Beat induction, the detection of a regular pulse in an auditory signal, is considered a fundamental human trait that, arguably, played a decisive role in the origin of music. Theorists are divided on the issue of whether this ability is innate or learned. We show that newborn infants develop an expectation for the onset of rhythmic cycles (the downbeat), even when it is not marked by stress or other distinguishing spectral features. Omitting the downbeat elicits brain activity associated with violating sensory expectations. Thus, our results strongly support the view that beat perception is innate.
Article
Full-text available
When auditory stimuli are presented at rates near 40/s, they evoke a steady-state middle latency response. This results from the superposition of the transient responses evoked by each of the rapidly presented stimuli. The steady-state evoked potentials are most appropriately analyzed using frequency-based techniques. The response is larger for stimuli of higher intensity and of lower tonal frequency. The amplitude of the response varies with the state of arousal of the subject. Sleep results in a decrease in the amplitude to between one third and one half of the amplitude during wakefulness. The response is even further attenuated by general anesthesia. This auditory steady-state evoked potential may therefore be helpful in monitoring the state of arousal of a patient undergoing anesthesia.
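The superposition account is easy to demonstrate numerically: convolving a 40-per-second stimulus train with a single hypothetical transient response shows the overlapping transients settling into a stable composite wave. A minimal sketch with a made-up transient shape:

```python
import numpy as np

fs = 10_000                                  # sampling rate, Hz
t = np.arange(0, 0.05, 1 / fs)               # 50-ms transient window
# Hypothetical middle-latency transient: a damped ~40-Hz oscillation
transient = np.exp(-t / 0.02) * np.sin(2 * np.pi * 40 * t)

rate = 40.0                                  # stimuli per second
train = np.zeros(fs)                         # 1 s of stimulation
train[(np.arange(0, 1, 1 / rate) * fs).astype(int)] = 1.0

steady_state = np.convolve(train, transient)[:fs]   # superposed transients
# After the first few stimuli the overlap settles into a near-sinusoidal
# 40-Hz composite: the steady-state response.
print(steady_state[fs // 2 : fs // 2 + 5])
```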
Article
Full-text available
Investigations of the psychological representation for musical meter provided evidence for an internalized hierarchy from 3 sources: frequency distributions in musical compositions, goodness-of-fit judgments of temporal patterns in metrical contexts, and memory confusions in discrimination judgments. The frequency with which musical events occurred in different temporal locations differentiates one meter from another and coincides with music-theoretic predictions of accent placement. Goodness-of-fit judgments for events presented in metrical contexts indicated a multileveled hierarchy of relative accent strength, with finer differentiation among hierarchical levels by musically experienced than inexperienced listeners. Memory confusions of temporal patterns in a discrimination task were characterized by the same hierarchy of inferred accent strength. These findings suggest mental representations for structural regularities underlying musical meter that influence perceiving, remembering, and composing music.
Article
Full-text available
Melody processing in unilaterally brain-damaged patients was investigated by manipulating the availability of contour and metre for discrimination in melodies varying, respectively, on the pitch dimension and the temporal dimension. On the pitch dimension, right brain-damaged patients, in contrast to left brain-damaged patients and normal controls, were found to be little affected by the availability of contour as a discrimination cue. However, both brain-damaged groups were impaired on tasks requiring consideration of pitch interval structure. These findings are consistent with hierarchical contribution of the cerebral hemispheres, with the right hemisphere being primary in representing the melody in terms of its global contour and the left hemisphere by filling in the intervallic structure. On the temporal dimension, only the discrimination of durational values (the rhythm) was found to be impaired by a lesion in either hemisphere, which spared, however, the metric interpretation of the musical sequences. These latter results are discussed in the light of current models of temporal processing. Finally, evidence of double dissociation between the processing of the pitch dimension and the processing of rhythm was obtained, providing further support for the need to fractionate musical perceptual abilities in order to arrive at a theory as to how the two hemispheres cohere to produce a musical interpretation of the auditory input.
Article
Full-text available
The present experiment tested the suggestion that human listeners may exploit durational information in speech to parse continuous utterances into words. Listeners were presented with six-syllable unpredictable utterances under noise-masking, and were required to judge between alternative word strings as to which best matched the rhythm of the masked utterances. For each utterance there were four alternative strings: (a) an exact rhythmic and word boundary match, (b) a rhythmic mismatch, and (c) two utterances with the same rhythm as the masked utterance, but different word boundary locations. Listeners were clearly able to perceive the rhythm of the masked utterances: The rhythmic mismatch was chosen significantly less often than any other alternative. Within the three rhythmically matched alternatives, the exact match was chosen significantly more often than either word boundary mismatch. Thus, listeners both perceived speech rhythm and used durational cues effectively to locate the position of word boundaries.
Article
Full-text available
Computer techniques readily extract from the brainwaves an orderly sequence of brain potentials locked in time to sound stimuli. The potentials that appear 8 to 80 msec after the stimulus resemble 3 or 4 cycles of a 40-Hz sine wave; we show here that these waves combine to form a single, stable, composite wave when the sounds are repeated at rates around 40 per second. This phenomenon, the 40-Hz event-related potential (ERP), displays several properties of theoretical and practical interest. First, it reportedly disappears with surgical anesthesia, and it resembles similar phenomena in the visual and olfactory system, facts which suggest that adequate processing of sensory information may require cyclical brain events in the 30- to 50-Hz range. Second, latency and amplitude measurements on the 40-Hz ERP indicate it may contain useful information on the number and basilar membrane location of the auditory nerve fibers a given tone excites. Third, the response is present at sound intensities very close to normal adult thresholds for the audiometric frequencies, a fact that could have application in clinical hearing testing.
Article
Full-text available
Music processing ability was studied in 65 right-handed patients who had undergone unilateral temporal cortectomy for the relief of intractable epilepsy, and 24 matched normal controls. The ability to recognize changes in note intervals and to distinguish between different rhythms and metres was tested by presentation of sequences of simple musical phrases with variations in either pitch or temporal dimensions. The responses (right or wrong) enabled us to determine in which component of the music processing mechanism the patients had deficits and hence, knowing the positions of the surgical lesions, to identify their separate cerebral locations. The results showed that a right temporal cortectomy impaired the use of both contour and interval information in the discrimination of melodies and a left temporal cortectomy impaired only the use of interval information. Moreover, they underlined the importance of the superior temporal gyrus in melody processing. The excision of a part of the auditory areas (posterior part of the superior temporal gyrus) was found to be most detrimental for pitch and temporal variation processing. In the temporal dimension, we observed a dissociation between metre and rhythm and the critical involvement of the anterior part of the superior temporal gyrus in metric processing. This study highlights the relevance of dissociating musical abilities into their most significant cognitive components in order to identify their separate cerebral locations.
Article
The beneficial effects of musical training are not limited to enhancement of musical skills, but extend to language skills. Here, we review evidence that musical training can enhance reading ability. First, we discuss five subskills underlying reading acquisition (phonological awareness, speech-in-noise perception, rhythm perception, auditory working memory, and the ability to learn sound patterns) and show that each is linked to music experience. We link these five subskills through a unifying biological framework, positing that they share a reliance on auditory neural synchrony. After laying this theoretical groundwork for why musical training might be expected to enhance reading skills, we review the results of longitudinal studies providing evidence for a role for musical training in enhancing language abilities. Taken as a whole, these findings suggest that musical training can provide an effective developmental educational strategy for all children, including those with language learning impairments.
Article
The cognitive strategies by which humans process complex, metrically ambiguous rhythmic patterns remain poorly understood. We investigated listeners' abilities to perceive, process, and produce complex, syncopated rhythmic patterns played against a regular sequence of pulses. Rhythmic complexity was varied along a continuum; complexity was quantified using an objective metric of syncopation suggested by Longuet-Higgins and Lee. We used a recognition memory task to assess the immediate and longer-term perceptual salience and memorability of rhythmic patterns. The tasks required subjects to (a) tap in time to the rhythms, (b) reproduce these same rhythm patterns given a steady pulse, and (c) recognize these patterns when replayed both immediately after the other tasks and after a 24-hour delay. Subjects tended to reset the phase of their internally generated pulse with highly complex, syncopated rhythms, often pursuing a strategy of reinterpreting or "re-hearing" the rhythm as less syncopated. Thus, greater complexity in rhythmic stimuli leads to a reorganization of the cognitive representation of the temporal structure of events. Less complex rhythms were also more robustly encoded into long-term memory than more complex, syncopated rhythms in the delayed memory task.
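The Longuet-Higgins and Lee metric scores a rhythm against metrical weights on a binary grid: a rest that falls on a metrically stronger position than the preceding note onset contributes the weight difference. The sketch below is a simplified reading of that idea; the weight scheme, cyclic handling, and example patterns are our assumptions, not the authors' implementation.

```python
def v2(i):
    """Exponent of the largest power of two dividing i (i > 0)."""
    k = 0
    while i % 2 == 0:
        i //= 2
        k += 1
    return k

def lhl_weights(levels=4):
    """Metrical weights on a binary grid of 2**levels positions:
    0 at the downbeat, increasingly negative at weaker positions."""
    n = 2 ** levels
    return [0 if i == 0 else v2(i) - levels for i in range(n)]

def lhl_syncopation(pattern, levels=4):
    """Simplified LHL syncopation score for a binary onset pattern
    (1 = onset, 0 = rest) on a 2**levels grid."""
    w = lhl_weights(levels)
    n = len(pattern)
    score = 0
    for r in range(n):
        if pattern[r] == 0:                  # a rest...
            j = (r - 1) % n
            while pattern[j] == 0 and j != r:
                j = (j - 1) % n              # ...preceded (cyclically) by an onset
            if pattern[j] == 1 and w[r] > w[j]:
                score += w[r] - w[j]         # stronger rest after weaker note
    return score

straight = [1,0,0,0, 1,0,0,0, 1,0,0,0, 1,0,0,0]   # unsyncopated: score 0
offbeat  = [0,0,1,0, 0,0,1,0, 0,0,1,0, 0,0,1,0]   # syncopated: positive score
print(lhl_syncopation(straight), lhl_syncopation(offbeat))
```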
Article
Even within equitonal isochronous sequences, listeners report perceiving differences among the tones, reflecting some grouping and accenting of the sound events. In a previous study, we explored this phenomenon of "subjective rhythmization" physiologically through brain event-related potentials (ERPs). We found differences in the ERP responses to small intensity deviations introduced in different positions of isochronous sequences, even though all sound events were physically identical. These differences seemed to follow a binary pattern, with larger amplitudes in the response elicited by deviants in odd-numbered than in even-numbered positions. The experiments reported here were designed to test whether the differences observed corresponded to a metrical pattern, by using a similar design in sequences of a binary (long-short) or a ternary (long-short-short) meter. We found a similar pattern of results in the binary condition, but a significantly different pattern in the ternary one. Importantly, the amplitude of the ERP response was largest in positions corresponding to strong beats in all conditions. These results support the notion of a binary default metrical pattern spontaneously imposed by listeners, and a better processing of the first (accented) event in each perceptual group. The differences were mainly observed in a late, attention-dependent component of the ERPs, corresponding to rather high-level processing.
Article
Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
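The heuristic itself is simple enough to express in a few lines: treat every strong (full-vowel) syllable as the likely start of a new lexical word, and attach weak syllables to the current word. The toy sketch below is our illustration of the idea, not the authors' materials.

```python
def mss_segment(syllables):
    """Metrical Segmentation Strategy sketch: insert a word boundary
    before every strong syllable. Syllables are (text, is_strong) pairs."""
    words, current = [], []
    for text, strong in syllables:
        if strong and current:
            words.append("".join(current))   # close the current word
            current = []
        current.append(text)
    if current:
        words.append("".join(current))
    return words

# "fundamental" coded as strong-weak-strong-weak syllables (hypothetical)
sylls = [("fun", True), ("da", False), ("men", True), ("tal", False)]
print(mss_segment(sylls))   # ['funda', 'mental']
```

The output shows the characteristic missegmentation: a boundary erroneously inserted before a strong syllable splits one word into two plausible lexical items.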
Article
To gain insight into the internal representation of temporal patterns, we studied the perception and reproduction of tone sequences in which only the tone-onset intervals were varied. A theory of the processing of such sequences, partly implemented as a computer program, is presented. A basic assumption of the theory is that perceivers try to generate an internal clock while listening to a temporal pattern. This internal clock is of a flexible nature that adapts itself to certain characteristics of the pattern under consideration. The distribution of accented events perceived in the sequence is supposed to determine whether a clock can (and which clock will) be generated internally. Further it is assumed that if a clock is induced in the perceiver, it will be used as a measuring device to specify the temporal structure of the pattern. The nature of this specification is formalized in a tentative coding model. Three experiments are reported that test different aspects of the model. In Experiment 1, subjects reproduced various temporal patterns that only differed structurally in order to test the hypothesis that patterns more readily inducing an internal clock will give rise to more accurate percepts. In Experiment 2, clock induction is manipulated experimentally to test the clock notion more directly. Experiment 3 tests the coding portion of the model by correlating theoretical complexity of temporal patterns based on the coding model with complexity judgments. The experiments yield data that support the theoretical ideas.
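The internal-clock idea can be sketched along these lines: accents are assigned to isolated tones, the second of a pair, and the first and last tones of longer runs; each candidate clock is then penalized for ticks that land on silence or on unaccented tones, and the clock with the least negative evidence is induced. The weights and example pattern below are illustrative, not the published parameter values.

```python
def accents(onsets, n):
    """Accent rules of the kind described: isolated tones, the second of
    a pair, and the first and last of runs of three or more tones."""
    runs, run = [], []
    for i in range(n):
        if i in onsets:
            run.append(i)
        elif run:
            runs.append(run)
            run = []
    if run:
        runs.append(run)
    acc = set()
    for r in runs:
        if len(r) == 1:
            acc.add(r[0])
        elif len(r) == 2:
            acc.add(r[1])
        else:
            acc.update((r[0], r[-1]))
    return acc

def negative_evidence(onsets, n, unit, phase, w_silence=4, w_unaccented=1):
    """Penalty for a clock ticking every `unit` grid points from `phase`:
    ticks on silence and on unaccented tones count against the clock."""
    acc = accents(onsets, n)
    score = 0
    for tick in range(phase, n, unit):
        if tick not in onsets:
            score += w_silence
        elif tick not in acc:
            score += w_unaccented
    return score

# Hypothetical 16-point pattern; the induced clock minimizes the penalty
onsets = {0, 2, 3, 4, 7, 9, 12, 14}
best = min(((u, p) for u in (2, 3, 4) for p in range(u)),
           key=lambda c: negative_evidence(onsets, 16, *c))
print(best)   # here a unit-4 clock starting on the downbeat wins
```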