Article
PDF available

A Unified Model of Time Perception Accounts for Duration-Based and Beat-Based Timing Mechanisms

Authors: Sundeep Teki, Manon Grube, Timothy D. Griffiths

Abstract and Figures

Accurate timing is an integral aspect of sensory and motor processes such as the perception of speech and music and the execution of skilled movement. Neuropsychological studies of time perception in patient groups and functional neuroimaging studies of timing in normal participants suggest common neural substrates for perceptual and motor timing. A timing system is implicated in core regions of the motor network, such as the cerebellum, inferior olive, basal ganglia, pre-supplementary and supplementary motor areas, and premotor cortex, as well as in higher-level areas such as the prefrontal cortex. In this article, we assess how distinct parts of the timing system subserve different aspects of perceptual timing. We previously established brain bases for absolute, duration-based timing and relative, beat-based timing in the olivocerebellar and striato-thalamo-cortical circuits, respectively (Teki et al., 2011). However, neurophysiological and neuroanatomical studies provide a basis to suggest that the timing functions of these circuits may not be independent. Here, we propose a unified model of time perception based on coordinated activity in the core striatal and olivocerebellar networks, which are interconnected with each other and with the cerebral cortex through multiple synaptic pathways. Timing in this unified model is proposed to involve serial beat-based striatal activation followed by absolute olivocerebellar timing mechanisms.
(A) Absolute and relative timing tasks. Irreg: a sequence of clicks with an average of 15% jitter was used to study absolute, duration-based timing. Participants were required to compare the duration of the final interval, T_n, to that of the penultimate interval, T_(n-1), where the final interval incorporates a difference (ΔT) of 30% of the inter-onset interval (range: 440–560 ms) relative to the preceding interval, such that T_n = T_(n-1) ± ΔT(30%). Reg: a sequence of clicks with no jitter was used to study relative, beat-based timing. Participants were required to compare the duration of the final interval, T_n, to that of the penultimate interval, T_(n-1), where the final interval incorporates a difference (ΔT) of 15% of the inter-onset interval, such that T_n = T_(n-1) ± ΔT(15%) (cf. Teki et al., 2011 for further stimulus details). (B) A unified model of time perception. The striatal network (in blue) and the olivocerebellar network (in green) are connected to each other via multiple loops, and to the thalamus, pre-SMA/SMA, and the cerebral cortex. Dopaminergic pathways are shown in orange, inhibitory projections in red, and excitatory and known anatomical connections in solid and dashed black lines, respectively. IO, inferior olive; VTA, ventral tegmental area; GPe, globus pallidus externus; GPi, globus pallidus internus; STN, subthalamic nucleus; SNpc, substantia nigra pars compacta; SNpr, substantia nigra pars reticulata. (C) Timing mechanism underlying the unified model. To estimate an interval of duration T, the striato-thalamo-cortical and olivocerebellar networks act in parallel to produce timing signals T_SBF and T_OC, respectively, such that the combined output of the system approximates the length of the criterion interval, T.
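The stimulus construction and the read-out in panel (C) can be made concrete with a short sketch. Below is a minimal Python/NumPy illustration, not the authors' code: only the jitter levels, ΔT values, and IOI range come from the caption, while the sequence length, jitter distribution, and interface are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_trial(n_intervals=7, ioi_range=(0.440, 0.560),
               jitter=0.15, delta=0.30, rng=rng):
    """Build one trial: click onsets whose final interval T_n differs from
    the penultimate interval T_(n-1) by +/- delta (a fraction of the IOI).

    jitter=0.15, delta=0.30 approximates the Irreg (duration-based) condition;
    jitter=0.0,  delta=0.15 approximates the Reg (beat-based) condition.
    Only these values come from the caption; the number of intervals and the
    uniform jitter distribution are illustrative assumptions.
    """
    base_ioi = rng.uniform(*ioi_range)                      # nominal IOI in seconds
    context = base_ioi * (1 + rng.uniform(-jitter, jitter, n_intervals - 1))
    t_penultimate = context[-1]
    sign = rng.choice([-1, 1])                              # final interval longer/shorter
    t_final = t_penultimate + sign * delta * base_ioi       # T_n = T_(n-1) +/- dT
    intervals = np.append(context, t_final)
    onsets = np.concatenate(([0.0], np.cumsum(intervals)))  # click onset times
    return onsets, ("longer" if sign > 0 else "shorter")

irreg_onsets, irreg_answer = make_trial(jitter=0.15, delta=0.30)  # absolute timing
reg_onsets, reg_answer = make_trial(jitter=0.0, delta=0.15)       # beat-based timing
```

Under this reading, panel (C) amounts to the two networks contributing complementary signals whose combination, e.g. T_SBF + T_OC in the simplest additive case, approximates the criterion interval T.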
... However, these two timing mechanisms, which have been applied to the context of musical rhythm more specifically, are not mutually exclusive (Penhune and Zatorre, 2019). Multiple frameworks assert a unified and interactive account of beat-based and duration-based timing, supported by functional and anatomical connections between the cerebellum and striato-thalamo-cortical circuits (Bostan and Strick, 2018; Koch et al., 2009; Petter et al., 2016; Schwartze and Kotz, 2013; Teki et al., 2012). Additionally, depending on the task, there can be individual differences in the extent to which these two timing mechanisms are employed (Grahn and McAuley, 2009). ...
... We also found activation of the cerebellum bilaterally. While some functional neuroimaging (Teki et al., 2011) and neuropsychological studies (Breska and Ivry, 2018; Grube et al., 2010) suggest dissociations between striatal/SMA and cerebellar roles for beat-based and duration-based timing, respectively, multiple frameworks propose an integrated account of these two timing mechanisms (Petter et al., 2016; Schwartze and Kotz, 2013; Teki et al., 2012). The neural circuits subserving beat-based and duration-based timing likely function as a unified system rather than segregated operations. ...
... Our descriptive characterization of the studies included in the meta-analysis highlights vast differences in the type and description of stimuli and tasks. Despite these design differences, however, results converge across studies to reveal brain networks for musical rhythm that align with previous literature on timing frameworks (Merchant, 2014; Petter et al., 2016; Schwartze and Kotz, 2013; Teki et al., 2011, 2012). The interesting heterogeneity brought to light by our meta-analysis provides new directions for the design and implementation of future fMRI studies of musical rhythm. ...
Article
We conducted a systematic review and meta-analysis of 30 functional magnetic resonance imaging studies investigating processing of musical rhythms in neurotypical adults. First, we identified a general network for musical rhythm, encompassing all relevant sensory and motor processes (Beat-based, rest baseline, 12 contrasts), which revealed a large network involving auditory and motor regions. This network included the bilateral superior temporal cortices, supplementary motor area (SMA), putamen, and cerebellum. Second, we identified more precise loci for beat-based musical rhythms (Beat-based, audio-motor control, 8 contrasts) in the bilateral putamen. Third, we identified regions modulated by beat-based rhythmic complexity (Complexity, 16 contrasts), which included the bilateral SMA-proper/pre-SMA, cerebellum, inferior parietal regions, and right temporal areas. This meta-analysis suggests that musical rhythm is largely represented in a bilateral cortico-subcortical network. Our findings align with existing theoretical frameworks about auditory-motor coupling to a musical beat and provide a foundation for studying how the neural bases of musical rhythm may overlap with other cognitive domains.
... However, not all sub-second intervals require the same level of sensorimotor engagement. For example, sub-second intervals that are embedded in complex musical rhythms rely on predictive mechanisms that are distinct from the mechanisms of absolute interval timing (Teki et al., 2011, 2012; Patel and Iversen, 2014; Iversen and Balasubramaniam, 2016; Ross et al., 2016b). Absolute interval timing between auditory events may rely on "interval" timing mechanisms, while music may require "beat" timing, a continuous process that involves finding the underlying pulse in auditory events with some rhythmicity (Figure 1A). ...
... Because we must plan for a synchronized movement in advance, and there is some automaticity to this planning when we listen to auditory rhythms, it is reasonable to ask whether we also perform some degree of motor planning every time we perceive a rhythm, even if we do not move any body part in time with it. Musical rhythms can be used to learn about neural signatures of and substrates for timing (Teki et al., 2011, 2012; Arnal, 2012; Morillon and Baillet, 2017). Musical rhythms are complex, hierarchical patterns of auditory events that induce perceptual constructs of timing and engage motor networks in the brain. ...
Article
Full-text available
Neural mechanisms supporting time perception in continuously changing sensory environments may be relevant to a broader understanding of how the human brain utilizes time in cognition and action. In this review, we describe current theories of sensorimotor engagement in the support of subsecond timing. We focus on musical timing due to the extensive literature surrounding movement with and perception of musical rhythms. First, we define commonly used but ambiguous concepts, including neural entrainment, simulation, and prediction, in the context of musical timing. Next, we summarize the literature on sensorimotor timing during perception and performance and describe current theories of sensorimotor engagement in the support of subsecond timing. We review the evidence that sensorimotor engagement is critical for accurate time perception. Finally, potential clinical implications of a sensorimotor perspective on timing are highlighted.
... Like simple interval-based speech rhythm metrics, the nPVI demonstrates weak predictive power for classification (Nolan and Asu, 2009; Arvaniti, 2012). Moreover, this measure conveys information about durational timing, which may be neurally (Teki et al., 2011, 2012; Breska and Ivry, 2016) and behaviourally (Tierney and Kraus, 2015; Pope and Studenka, 2019) distinct from the form of timing more typically associated with rhythm, which is event timing (Leow and Grahn, 2014). An intuitive example of the difference between durational and event timing can be borrowed from tennis, where you would use durational timing to measure the length of time required to perform a serve, and event timing to describe the pattern of recurring shots between two players engaged in a rally. ...
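For concreteness, the nPVI mentioned above is a simple pairwise contrast measure over successive durations; a minimal sketch (the function name and toy durations are illustrative) is:

```python
import numpy as np

def npvi(durations):
    """Normalized pairwise variability index over successive durations.

    nPVI = 100/(m-1) * sum_k |d_k - d_(k+1)| / ((d_k + d_(k+1)) / 2)
    Higher values indicate more durational contrast between neighbours.
    """
    d = np.asarray(durations, dtype=float)
    pairwise = np.abs(np.diff(d)) / ((d[:-1] + d[1:]) / 2.0)
    return 100.0 * pairwise.mean()

# Toy usage: isochronous vs. alternating long-short durations (in seconds).
print(npvi([0.2, 0.2, 0.2, 0.2]))   # 0.0
print(npvi([0.1, 0.3, 0.1, 0.3]))   # 100.0
```

Perfectly isochronous sequences score 0, while alternating long-short sequences score much higher (100 in the toy example), which is why the nPVI indexes durational contrast rather than event timing.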
... Durational timing and event timing may be supported by distinct neural architectures (Teki et al., 2011, 2012), and behavioural experiments have found that skills and aptitudes associated with each form of timing are to some extent dissociable (Tierney and Kraus, 2015). What this means for speech rhythm research is that durational measures like the nPVI may tell us less about ecologically relevant aspects of the acoustic speech signal. ...
Thesis
Full-text available
Speech rhythm can be described as the temporal patterning by which speech events, such as vocalic onsets, occur. Despite efforts to quantify and model speech rhythm across languages, it remains a scientifically enigmatic aspect of prosody. For instance, one challenge lies in determining how to best quantify and analyse speech rhythm. Techniques range from manual phonetic annotation to the automatic extraction of acoustic features. It is currently unclear how closely these differing approaches correspond to one another. Moreover, the primary means of speech rhythm research has been the analysis of the acoustic signal only. Investigations of speech rhythm may instead benefit from a range of complementary measures, including physiological recordings, such as of respiratory effort. This thesis therefore combines acoustic recording with inductive plethysmography (breath belts) to capture temporal characteristics of speech and speech breathing rhythms. The first part examines the performance of existing phonetic and algorithmic techniques for acoustic prosodic analysis in a new corpus of rhythmically diverse English and Mandarin speech. The second part addresses the need for an automatic speech breathing annotation technique by developing a novel function that is robust to the noisy plethysmography typical of spontaneous, naturalistic speech production. These methods are then applied in the following section to the analysis of English speech and speech breathing in a second, larger corpus. Finally, behavioural experiments were conducted to investigate listeners' perception of speech breathing using a novel gap detection task. The thesis establishes the feasibility, as well as limits, of automatic methods in comparison to manual annotation. In the speech breathing corpus analysis, they help show that speakers maintain a normative, yet contextually adaptive breathing style during speech. The perception experiments in turn demonstrate that listeners are sensitive to the violation of these speech breathing norms, even if unconsciously so. The thesis concludes by underscoring breathing as a necessary, yet often overlooked, component in speech rhythm planning and production.
... The basic cycle of the central production system consists of the contents of all the buffers being matched against the rules stored in procedural memory. Teki et al. (2012) integrated and refined the available cognitive models, then mapped the cognitive and behavioral functions of time perception and timing onto neural networks. ...
... These paradigms can be divided into retrospective timing tasks, which involve "surprise" prompts to reflect on how much time has passed, and prospective tasks, which involve advanced instruction to monitor an upcoming period of time. Within the more novel framework of Teki et al. (2012), retrospective paradigms purportedly emphasize reliance on reference memory cognitive processes, while prospective paradigms emphasize reliance on pacemaker and working memory cognitive processes, though this is not to mistakenly suggest neatly dichotomized subprocesses. Within these two paradigms of time estimation, response format is varied in ways that are thought to be preferentially sensitive to different underlying cognitive mechanisms that contribute to overall performance accuracy. ...
... Perceptual as well as motor timing recruits olivocerebellar and striato-thalamo-cortical circuits [10,11]. ...
Article
Full-text available
Accurate motor timing requires the temporally precise coupling between sensory input and motor output including the adjustment of movements with respect to changes in the environment. Such error correction has been related to a cerebello-thalamo-cortical network. At least partially distinct networks for the correction of perceived (i.e., conscious) as compared to nonperceived (i.e., nonconscious) errors have been suggested. While the cerebellum, the premotor, and the prefrontal cortex seem to be involved in conscious error correction, the network subserving nonconscious error correction is less clear. The present study is aimed at investigating the functional contribution of the primary motor cortex (M1) for both types of error correction in the temporal domain. To this end, anodal transcranial direct current stimulation (atDCS) was applied to the left M1 in a group of 18 healthy young volunteers during a resting period of 10 minutes. Sensorimotor synchronization as well as error correction of the right index finger was tested immediately prior to and after atDCS. Sham stimulation served as control condition. To induce error correction, nonconscious and conscious temporal step-changes were interspersed in a sequence of an isochronous auditory pacing signal in either direction (i.e., negative or positive) yielding either shorter or longer intervals. Prior to atDCS, faster error correction in conscious as compared to nonconscious trials was observed replicating previous findings. atDCS facilitated nonconscious error correction, but only in trials with negative step-changes yielding shorter intervals. In contrast to this, neither tapping speed nor synchronization performance with respect to the isochronous pacing signal was significantly modulated by atDCS. The data suggest M1 as part of a network distinctively contributing to the correction of nonconscious negative step-changes going beyond sensorimotor synchronization.
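To make the paradigm concrete, here is a rough sketch (not the study's code) of an isochronous pacing sequence with an interspersed temporal step-change, plus a simulated tapper using a standard linear phase-correction rule; the step size, correction gain alpha, and noise level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def pacing_with_step(n_tones=30, ioi=0.500, step_at=15, step=-0.050):
    """Isochronous pacing tones with one step-change in the inter-onset interval.

    step < 0 gives shorter intervals (a negative step-change), step > 0 longer.
    The 50 ms example magnitude is arbitrary, not the value used in the study."""
    iois = np.full(n_tones - 1, ioi)
    iois[step_at:] += step
    return np.concatenate(([0.0], np.cumsum(iois)))

def simulate_taps(tone_onsets, alpha=0.4, motor_sd=0.010):
    """Linear phase correction: each new inter-tap interval equals the current
    pacing period minus a fraction alpha of the preceding tap-tone asynchrony,
    plus Gaussian motor noise."""
    taps = [tone_onsets[0]]
    period = tone_onsets[1] - tone_onsets[0]
    for k in range(1, len(tone_onsets)):
        asyn = taps[-1] - tone_onsets[k - 1]          # previous asynchrony
        taps.append(taps[-1] + period - alpha * asyn + rng.normal(0.0, motor_sd))
        period = tone_onsets[k] - tone_onsets[k - 1]  # track the pacing interval
    return np.asarray(taps)

tones = pacing_with_step(step=-0.050)   # negative step-change: shorter intervals
taps = simulate_taps(tones)
asynchronies = taps - tones             # gradual error correction after the step
```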
... Any global inferences that are drawn concerning how speech rhythm operates without regard for its embodied and temporal situatedness may, therefore, be misleading. Moreover, these metrics convey information about durational timing, which may be neurally (Breska and Ivry, 2016; Teki et al., 2011, 2012) and behaviourally (Pope and Studenka, 2019; Tierney and Kraus, 2015) distinct from the form of timing more closely associated with rhythm and motor sequencing, which is event timing (see Leow and Grahn, 2014, for review). An intuitive example of the difference between durational and event timing can be borrowed from tennis, where you would use durational timing to measure the length of time required to perform a serve and event timing to describe the pattern of recurring shots between two players engaged in a rally. ...
Article
Full-text available
The amplitude of the speech signal varies over time, and the speech envelope is an attempt to characterise this variation in the form of an acoustic feature. Although tacitly assumed, the similarity between the speech envelope-derived time series and that of phonetic objects (e.g., vowels) remains empirically unestablished. The current paper, therefore, evaluates several speech envelope extraction techniques, such as the Hilbert transform, by comparing different acoustic landmarks (e.g., peaks in the speech envelope) with manual phonetic annotation in a naturalistic and diverse dataset. Joint speech tasks are also introduced to determine which acoustic landmarks are most closely coordinated when voices are aligned. Finally, the acoustic landmarks are evaluated as predictors for the temporal characterisation of speaking style using classification tasks. The landmark that performed most closely to annotated vowel onsets was peaks in the first derivative of a human audition-informed envelope, consistent with converging evidence from neural and behavioural data. However, differences also emerged based on language and speaking style. Overall, the results show that both the choice of speech envelope extraction technique and the form of speech under study affect how sensitive an engineered feature is at capturing aspects of speech rhythm, such as the timing of vowels.
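One of the pipelines evaluated above can be sketched as follows, assuming Python with NumPy/SciPy: a Hilbert-transform amplitude envelope, low-pass smoothed, with peaks in its first derivative taken as candidate vowel-onset landmarks. The cutoff frequency, filter order, and peak-separation threshold are illustrative choices, not those of the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, find_peaks

def envelope_landmarks(signal, fs, lp_cutoff=10.0, min_separation=0.1):
    """Return peak times (s) of the first derivative of a smoothed
    Hilbert amplitude envelope, as rough proxies for vowel onsets."""
    env = np.abs(hilbert(signal))                       # amplitude envelope
    b, a = butter(4, lp_cutoff / (fs / 2), btype="low")
    env = filtfilt(b, a, env)                           # keep slow modulations only
    d_env = np.gradient(env) * fs                       # first derivative (per second)
    peaks, _ = find_peaks(d_env, distance=int(min_separation * fs))
    return peaks / fs

# Toy usage: a synthetic amplitude-modulated carrier (~4 "syllables" per second).
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
toy = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 200 * t)
print(envelope_landmarks(toy, fs))
```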
... A unifying model has recently been proposed that connects both pathways for the contextual representation of time, in which auditory afferents from the cochlear nucleus project to the inferior olive, which relays information to the thalamus and cerebellum, allowing the representation of time to be contextualized within the environment. Accordingly, disruption of external stimuli, as in hearing loss, could produce a loss of coordination with the external environment, leading to motor and balance impairments (Teki et al., 2012). ...
Article
Full-text available
Introduction: At the neurophysiological level, the cerebellum, the basal ganglia, and the limbic system are important for the coordination and memory of movement. Objective: Understanding the processes involved in the sensory-perceptual relationship between hearing and motor learning is an ongoing motivation across different disciplines. Method: We present a documentary review aimed at analyzing the relationship between auditory perception and motor learning, using content analysis from the perspectives of audiology, medicine, and neurorehabilitation. The keywords and combinations considered were: auditory perception, learning, balance, coordination, and the combinations hearing-learning, hearing-balance, and hearing-coordination. The databases and metasearch engines PubMed, Medscape, Trip, ScienceDirect, EBSCOhost, PEDro, SciELO, and LILACS were used, as well as virtual libraries such as SINAB, Cochrane, Universidad de Málaga, the US National Library of Medicine, and the National Institutes of Health. Twenty-two articles that met the inclusion criteria were selected. Results: A relationship between auditory perception and motor learning was found in the communication of auditory and motor sensory information at the level of processing in the cerebellum and basal ganglia, which is fundamental for motor retention and transfer. Conclusion: In the process of motor learning, which involves the experience of movement, we propose that hearing participates by integrating the perceived signals (visual, auditory, motor, and vestibular), which together improve learning, making it more effective and generating a more durable memory.
Article
The sensory experience of transcranial magnetic stimulation (TMS) evokes cortical responses measured in electroencephalography (EEG) that confound interpretation of TMS-evoked potentials (TEPs). Methods for sensory masking have been proposed to minimize sensory contributions to the TEP, but the most effective combination for suprathreshold TMS to dorsolateral prefrontal cortex (dlPFC) is unknown. We applied sensory suppression techniques and quantified electrophysiology and perception from suprathreshold dlPFC TMS to identify the best combination to minimize the sensory TEP. In 21 healthy adults, we applied single pulse TMS at 120% resting motor threshold (rMT) to the left dlPFC and compared EEG vertex N100-P200 and perception. Conditions included three protocols: No masking (no auditory masking, no foam, and jittered interstimulus interval [ISI]), Standard masking (auditory noise, foam, and jittered ISI), and our ATTENUATE protocol (auditory noise, foam, over-the-ear protection, and unjittered ISI). ATTENUATE reduced vertex N100-P200 by 56%, "click" loudness perception by 50%, and scalp sensation by 36%. We show that sensory prediction, induced with predictable ISI, has a suppressive effect on vertex N100-P200, and that combining standard suppression protocols with sensory prediction provides the best N100-P200 suppression. ATTENUATE was more effective than Standard masking, which only reduced vertex N100-P200 by 22%, loudness by 27%, and scalp sensation by 24%. We introduce a sensory suppression protocol superior to Standard masking and demonstrate that using an unjittered ISI can contribute to minimizing sensory confounds. ATTENUATE provides superior sensory suppression to increase TEP signal-to-noise and contributes to a growing understanding of TMS-EEG sensory neuroscience.
Article
Joint music performance requires flexible sensorimotor coordination between self and other. Cognitive and sensory parameters of joint action—such as shared knowledge or temporal (a)synchrony—influence this coordination by shifting the balance between self-other segregation and integration. To investigate the neural bases of these parameters and their interaction during joint action, we asked pianists to play on an MR-compatible piano, in duet with a partner outside of the scanner room. Motor knowledge of the partner’s musical part and the temporal compatibility of the partner’s action feedback were manipulated. First, we found stronger activity and functional connectivity within cortico-cerebellar audio-motor networks when pianists had practiced their partner’s part before. This indicates that they simulated and anticipated the auditory feedback of the partner by virtue of an internal model. Second, we observed stronger cerebellar activity and reduced behavioral adaptation when pianists encountered subtle asynchronies between these model-based anticipations and the perceived sensory outcome of (familiar) partner actions, indicating a shift towards self-other segregation. These combined findings demonstrate that cortico-cerebellar audio-motor networks link motor knowledge and other-produced sounds depending on cognitive and sensory factors of the joint performance, and play a crucial role in balancing self-other integration and segregation.
Article
Humans have a natural tendency to move to music, which has been linked to the tight coupling between the auditory and motor system and the active role of the motor system in the perception of musical rhythms. High-groove music is particularly successful at inducing spontaneous movement, due to the engagement of (motor) prediction processes. However, how music listening transfers to the muscles even when no movement is intended is less known. Here we used cortico-muscular coherence (CMC) to investigate changes along the cortico-muscular pathway in response to different levels of groove in music listening without intention to move. Electroencephalography (EEG), Electromyography (EMG) from the finger and foot flexors, and continuous force signals were recorded in 18 participants while listening to either high-groove music, low-groove music or silence. Participants were required to hold a steady isometric contraction during all listening conditions. Subjective ratings confirmed that different levels of groove were successfully induced. However, no evidence was found for an effect of music, even high-groove music, on participants' CMC and ability to maintain a steady force for both upper and lower limbs irrespective of musical expertise. These results thus do not support a top-down influence of groove on cortico-muscular coupling. Nevertheless, it remains possible that such influence might occur in the form of dynamic modulations and/or with more active listening. Therefore, these results encourage further research to better understand the effects of groove on the motor system.
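As an illustration of the dependent measure rather than of the study's pipeline, cortico-muscular coherence can be estimated as Welch-based magnitude-squared coherence between an EEG channel and rectified EMG; the rectification step, window length, and beta-band limits below are assumptions:

```python
import numpy as np
from scipy.signal import coherence

def cortico_muscular_coherence(eeg, emg, fs, nperseg=1024):
    """Magnitude-squared coherence between an EEG channel and rectified EMG
    recorded during a steady isometric contraction."""
    emg_rect = np.abs(emg - emg.mean())              # demean, then full-wave rectify
    return coherence(eeg, emg_rect, fs=fs, nperseg=nperseg)

# Toy usage with independent noise; expected coherence is near zero everywhere.
fs = 1000
rng = np.random.default_rng(2)
eeg = rng.standard_normal(60 * fs)
emg = rng.standard_normal(60 * fs)
freqs, coh = cortico_muscular_coherence(eeg, emg, fs)
beta = (freqs >= 15) & (freqs <= 30)                 # CMC is typically strongest here
print(coh[beta].mean())
```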
Article
Full-text available
The cerebellum is known to project via the thalamus to multiple motor areas of the cerebral cortex. In this study, we examined the extent and anatomical organization of cerebellar input to multiple regions of prefrontal cortex. We first used conventional retrograde tracers to map the origin of thalamic projections to five prefrontal regions: medial area 9 (9m), lateral area 9 (9l), dorsal area 46 (46d), ventral area 46, and lateral area 12. Only areas 46d, 9m, and 9l received substantial input from thalamic regions included within the zone of termination of cerebellar efferents. This suggested that these cortical areas were the target of cerebellar output. We tested this possibility using retrograde transneuronal transport of the McIntyre-B strain of herpes simplex virus type 1 from areas of prefrontal cortex. Neurons labeled by retrograde transneuronal transport of virus were found in the dentate nucleus only after injections into areas 46d, 9m, and 9l. The precise location of labeled neurons in the dentate varied with the prefrontal area injected. In addition, the dentate neurons labeled after virus injections into prefrontal areas were located in regions spatially separate from those labeled after virus injections into motor areas of the cerebral cortex. Our observations indicate that the cerebellum influences several areas of prefrontal cortex via the thalamus. Furthermore, separate output channels exist in the dentate to influence motor and cognitive operations. These results provide an anatomical substrate for the cerebellum to be involved in cognitive functions such as planning, working memory, and rule-based learning.
Article
Two experiments examined whether timing of short intervals is beat- or interval-based. In Experiment 1, subjects heard a sequence of standard tones followed by 2 test tones; they compared the interval between test tones to the interval between the standards. If optimal precision required beat-based timing, performance should be best in blocks in which the interval between standard and test reliably matched the standard interval. No such effect was observed. In Experiment 2, subjects heard 2 test tones and reproduced the intertone interval by producing 2 keypress responses. Entrainment to the beat was apparent: First-response latency clustered around the standard interval and was positively correlated with the produced interval. However, responses occurring on or near the beat showed no better temporal fidelity than off-beat responses. One plausible interpretation of these findings is that the brain always times brief intervals with an interval timer; however, this timer can be used in a cyclic fashion to trigger rhythmic responses.
Article
Three experiments on the recognition of short melodies investigated the influence of contour and interval information (respectively, the pattern of changes in pitch direction and the ordered sequence of pitch distances in a melody). Subjects rated pairs of melodies as "same" or "different" on a five-point scale. Six conditions were defined by two delays (short, 1 sec; and long, 30 sec) and three item types (target, related, and lure). In Target pairs, the second melody retained the contour and interval information of the first melody, being an exact transposition to another key. In Related pairs, only the contour information was retained, while in the Lure pairs neither contour nor interval information was retained. In conformity with the reports of Dowling and Bartlett (1981), the results indicated that contour information had a larger influence on recognition at short delays, whereas interval information had a relatively larger influence at long delays. The results are also consistent with an alternative interpretation stressing the importance of tonality/modality information in melody recognition at long delays.
Article
This study investigated the effects of different types of neurological deficits on timing functions. The performance of Parkinson, cerebellar, cortical, and peripheral neuropathy patients was compared to age-matched control subjects on two separate measures of timing functions. The first task involved the production of timed intervals in which the subjects attempted to maintain a simple rhythm. The second task measured the subjects' perceptual ability to discriminate between small differences in the duration of two intervals. The primacy of the cerebellum in timing functions was demonstrated by the finding that these were the only patients who showed a deficit in both the production and perception of timing tasks. The cerebellar group was found to have increased variability in performing rhythmic tapping and they were less accurate than the other groups in making perceptual discriminations regarding small differences in duration. Critically, this perceptual deficit appears to be specific to the perception of time since the cerebellar patients were unaffected in a control task measuring the perception of loudness. It is argued that the operation of a timing mechanism can be conceptualized as an isolable component of the motor control system. Furthermore, the results suggest that the domain of the cerebellar timing process is not limited to the motor system, but is employed by other perceptual and cognitive systems when temporally predictive computations are needed.
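The tapping variability described here is commonly analyzed with the Wing-Kristofferson two-level model, which splits inter-tap-interval variance into central "clock" and peripheral "motor" components; the sketch below shows that standard decomposition as a generic illustration, not necessarily the analysis used in this study:

```python
import numpy as np

def wing_kristofferson(inter_tap_intervals):
    """Two-level decomposition of unpaced tapping variability into central
    'clock' and peripheral 'motor' variance via the lag-1 autocovariance:
        var(ITI) = clock_var + 2 * motor_var,  acov(lag 1) = -motor_var."""
    iti = np.asarray(inter_tap_intervals, dtype=float)
    iti = iti - iti.mean()
    total_var = iti.var(ddof=1)
    acov1 = np.mean(iti[:-1] * iti[1:])
    motor_var = max(-acov1, 0.0)                 # model predicts a negative lag-1 acov
    clock_var = max(total_var - 2.0 * motor_var, 0.0)
    return clock_var, motor_var

# Toy usage: simulate taps from the two-level model, then recover the components.
rng = np.random.default_rng(3)
clock = rng.normal(0.550, 0.020, size=200)       # central clock intervals (s)
motor = rng.normal(0.0, 0.010, size=201)         # peripheral motor delays (s)
iti = clock + np.diff(motor)                     # observed inter-tap intervals
print(wing_kristofferson(iti))                   # roughly (4e-4, 1e-4)
```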
Article
In Experiment 1, six cyclically repeating interonset interval patterns (1, 2:1, 2:1:1, 3:2:1, 3:1:2, and 2:1:1:2) were each presented at six different note rates (very slow to very fast). Each trial began at a random point in the rhythmic cycle. Listeners were asked to tap along with the underlying beat or pulse. The number of times a given pulse (period, phase) was selected was taken as a measure of its perceptual salience. Responses gravitated toward a moderate pulse period of about 700 ms. At faster tempi, taps coincided more often with events followed by longer interonset intervals. In Experiment 2, listeners heard the same set of rhythmic patterns, plus a single sound in a different timbre, and were asked whether the extra sound fell on or off the beat. The position of the downbeat was found to be quite ambiguous. A quantitative model was developed from the following assumptions. The phenomenal accent of an event depends on the interonset interval that follows it, saturating for interonset intervals greater than about 1 s. The salience of a pulse sensation depends on the number of events matching a hypothetical isochronous template, and on the period of the template; pulse sensations are most salient in the vicinity of roughly 100 events per minute (moderate tempo). The metrical accent of an event depends on the saliences of pulse sensations including that event. Calculated pulse saliences and metrical accents according to the model agree well with experimental results (r > 0.85). The model may be extended to cover perceived meter, perceptible subdivisions of a beat, categorical perception, expressive timing, temporal precision and discrimination, and primacy/recency effects. The sensation of pulse may be the essential factor distinguishing musical rhythm from nonrhythm.
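The model assumptions listed in this abstract can be sketched directly; in the illustration below, the saturation constant, the tempo-weighting function, the matching tolerance, and the grid search are assumed functional forms standing in for the published parameterization:

```python
import numpy as np

def phenomenal_accent(following_ioi, sat=1.0):
    """Accent grows with the IOI that follows an event and saturates for
    IOIs beyond roughly 1 s (exponential saturation assumed)."""
    return 1.0 - np.exp(-following_ioi / sat)

def tempo_weight(period, preferred=0.600, spread=0.6):
    """Pulses are most salient near a moderate tempo (~100 events per minute,
    i.e., a period of about 0.6 s); a log-Gaussian weighting is assumed."""
    return np.exp(-0.5 * (np.log(period / preferred) / spread) ** 2)

def pulse_salience(onsets, period, phase, tol=0.050):
    """Salience of an isochronous template (period, phase): summed accents of
    onsets falling within `tol` of a template tick, weighted by tempo."""
    onsets = np.asarray(onsets, dtype=float)
    iois = np.append(np.diff(onsets), np.median(np.diff(onsets)))  # last IOI assumed
    accents = phenomenal_accent(iois)
    ticks = np.arange(phase, onsets[-1] + tol, period)
    matched = np.array([np.min(np.abs(ticks - t)) < tol for t in onsets])
    return tempo_weight(period) * accents[matched].sum()

# Toy usage: a 2:1:1:2 pattern at a 0.3 s base note rate, repeated twice.
pattern = np.array([2, 1, 1, 2]) * 0.3
onsets = np.concatenate(([0.0], np.cumsum(np.tile(pattern, 2))))
best = max(((pulse_salience(onsets, p, ph), p, ph)
            for p in (0.3, 0.6, 0.9, 1.2, 1.8)
            for ph in (0.0, 0.3, 0.6)), key=lambda x: x[0])
print(best)   # (salience, period, phase) of the most salient pulse
```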