ABSTRACT: Synchronizing movements with rhythmic inputs requires tight coupling of sensory and motor neural processes. Here, using a novel approach based on the recording of steady-state evoked potentials (SS-EPs), we examine how distant brain areas supporting these processes coordinate their dynamics. The electroencephalogram was recorded while subjects listened to a 2.4-Hz auditory beat and tapped their hand on every second beat. When subjects tapped to the beat, the EEG was characterized by a 2.4-Hz SS-EP compatible with beat-related entrainment and a 1.2-Hz SS-EP compatible with movement-related entrainment, based on the results of source analysis. Most importantly, when compared with passive listening to the beat, we found evidence suggesting an interaction between sensory- and motor-related activities when subjects tapped to the beat, in the form of 1) an additional SS-EP appearing at 3.6 Hz, compatible with a nonlinear product of sensorimotor integration; 2) phase coupling of beat- and movement-related activities; and 3) selective enhancement of beat-related activities over the hemisphere contralateral to the tapping hand, suggesting a top-down effect of movement-related activities on auditory beat processing. Taken together, our results are compatible with the view that rhythmic sensorimotor synchronization is supported by a dynamic coupling of sensory- and motor-related activities.
ABSTRACT: We compared the electrophysiological correlates of the maintenance of non-musical tone sequences in auditory short-term memory (ASTM) to those of the short-term maintenance of sequences of coloured disks held in visual short-term memory (VSTM). The visual stimuli yielded a sustained posterior contralateral negativity (SPCN), suggesting that maintenance of sequences of coloured stimuli engaged structures similar to those involved in the maintenance of simultaneous visual displays. On the other hand, maintenance of acoustic sequences produced a sustained negativity at fronto-central sites. This component is named the Sustained Anterior Negativity (SAN). The amplitude of the SAN increased with increasing load in ASTM and predicted individual differences in performance. There was no SAN in a control condition with the same auditory stimuli but no memory task, nor was there one associated with visual memory. These results suggest that the SAN is an index of brain activity related to the maintenance of representations in ASTM that is distinct from the maintenance of representations in VSTM.
ABSTRACT: Pitch deafness, the most commonly known form of congenital amusia, refers to a severe deficit in musical pitch processing (i.e., melody discrimination and recognition) that can leave time processing (including rhythm, metre, and "feeling the beat") preserved. In Experiment 1, we show that by presenting musical excerpts in nonpitched drum timbres, rather than pitched piano tones, amusics show normal metre recognition. Experiment 2 reveals that body movement influences amusics' interpretation of the beat of an ambiguous drum rhythm. Experiment 3 and a subsequent exploratory study show an ability to synchronize movement to the beat of popular dance music and potential for improvement when given a modest amount of practice. Together the present results are consistent with the idea that rhythm and beat processing are spared in pitch deafness: being pitch-deaf does not mean one is beat-deaf. In the context of drum music especially, amusics can be musical.
ABSTRACT: OBJECTIVE: To examine the mechanisms responsible for the reduction of the mismatch negativity (MMN) ERP component observed in response to pitch changes when the soundtrack of a movie is presented while recording the MMN. METHODS: In three experiments we measured the MMN to tones that differed in pitch from a repeated standard tone presented with a silent subtitled movie, with the soundtrack played forward or backward, or with soundtracks set at different intensity levels. RESULTS: MMN amplitude was reduced when the soundtrack was presented either forward or backward compared to the silent subtitled movie. With the soundtrack, MMN amplitude increased proportionally to the increments in the sound-to-noise intensity ratio. CONCLUSION: MMN was reduced in amplitude but had normal morphology with a concurrent soundtrack, most likely because of basic acoustical interference from the soundtrack with MMN-critical tones rather than from attentional effects. SIGNIFICANCE: A normal MMN can be recorded with a concurrent movie soundtrack, but signal amplitudes need to be set with caution to ensure a sufficiently high sound-to-noise ratio between MMN stimuli and the soundtrack.
Clinical neurophysiology: official journal of the International Federation of Clinical Neurophysiology 06/2013; · 3.12 Impact Factor
ABSTRACT: Congenital amusia, a neurogenetic disorder, affects primarily pitch and melody perception. Here we test the hypothesis that amusics suffer from impaired access to spectro-temporal fine-structure cues associated with low-order resolved harmonics. The hypothesis is motivated by the fact that tones containing only unresolved harmonics yield poorer pitch sensitivity in normal-hearing listeners. Fundamental-frequency difference limens (F0DLs) were measured in amusics and matched controls for harmonic complexes containing either resolved or unresolved harmonics. Sensitivity to temporal fine structure was assessed via interaural-time-difference (ITD) thresholds, intensity resolution was probed via interaural-level-difference (ILD) thresholds and intensity difference limens, and spectral resolution was estimated using the notched-noise method. As expected, F0DLs were elevated in amusics for resolved harmonics; however, no difference between amusics and controls was found for F0DLs using unresolved harmonics. The deficit appears unlikely to be due to temporal-fine-structure coding, as ITD thresholds were unimpaired in the amusic group. In addition, no differences were found between the two groups in ILD thresholds, intensity difference limens, or auditory-filter bandwidths. Overall, the results suggest a pitch-specific deficit in fine spectro-temporal information processing in amusia that cannot be ascribed to defective temporal-fine-structure or spectral encoding in the auditory periphery. [Work supported by Fyssen Foundation, Erasmus Mundus, CIHR, and NIH grant R01DC05216.]
The Journal of the Acoustical Society of America 05/2013; 133(5):3381. · 1.65 Impact Factor
ABSTRACT: Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination, and it is expressed by the absence of a P3b event-related brain response to pitch differences smaller than a semitone and a larger N2b-P3b brain response to large pitch differences as compared to controls. However, it is still unclear why the amusic brain overreacts to large pitch changes. Furthermore, another electrophysiological study indicates that the amusic brain can respond to changes in melodies as small as a quarter-tone, without awareness, by exhibiting a normal mismatch negativity (MMN) brain response. Here, we re-examine the event-related N2b-P3b components with the aim of clarifying the cause of the larger amplitude observed by Peretz, Brattico, and Tervaniemi (2005), by experimentally matching the number of deviants presented to the controls to the number of deviants detected by amusics. We also re-examine the MMN component as well as the N1 in an acoustical context to investigate further the pitch discrimination deficit underlying congenital amusia. In two separate conditions, namely ignore and attend, we measured the MMN, the N1, the N2b, and the P3b to tones that deviated by an eighth of a tone (25 cents) or a whole tone (200 cents) from a repeated standard tone. The results show a normal MMN, a seemingly normal N1, a normal P3b for the 200-cents pitch deviance, and no P3b for the small 25-cents pitch differences in amusics. These results indicate that the amusic brain responds to small pitch differences at a pre-attentive level of perception, but is unable to consciously detect those same pitch deviances at a later attentive level. The results are consistent with previous MRI and fMRI studies indicating that the auditory cortex of amusic individuals is functioning normally.
Brain and Cognition 04/2013; 81(3):337-44. · 2.82 Impact Factor
ABSTRACT: We tested whether congenital amusics, who exhibit pitch perception deficits, nevertheless adjust the pitch of their voice in response to a sudden pitch shift applied to vocal feedback. Nine amusics and matched controls imitated their own previously recorded speech or singing, while the online feedback they received was shifted mid-utterance by 25 or 200 cents. While a few amusics failed to show pitch-shift effects, a majority showed a pitch-shift response, and nearly half showed a normal response to both large and small shifts, with magnitudes and response times similar to those of controls. The size and presence of the response to small shifts were significantly predicted by participants' vocal pitch matching accuracy, rather than by their ability to perceive small pitch changes. The observed dissociation between the ability to consciously perceive small pitch changes and the ability to produce and monitor vocal pitch provides evidence for a dual-route model of pitch processing in the brain.
Brain and Language 03/2013; 125(1):106-117. · 3.39 Impact Factor
ABSTRACT: Infants prefer speech to non-vocal sounds and to non-human vocalizations, and they prefer happy-sounding speech to neutral speech. They also exhibit an interest in singing, but there is little knowledge of their relative interest in speech and singing. The present study explored infants' attention to unfamiliar audio samples of speech and singing. In Experiment 1, infants 4-13 months of age were exposed to happy-sounding infant-directed speech vs. hummed lullabies by the same woman. They listened significantly longer to the speech, which had considerably greater acoustic variability and expressiveness, than to the lullabies. In Experiment 2, infants of comparable age who heard the lyrics of a Turkish children's song spoken vs. sung in a joyful/happy manner did not exhibit differential listening. Infants in Experiment 3 heard the happily sung lyrics of the Turkish children's song vs. a version that was spoken in an adult-directed or affectively neutral manner. They listened significantly longer to the sung version. Overall, happy voice quality rather than vocal mode (speech or singing) was the principal contributor to infant attention, regardless of age.
ABSTRACT: A large body of literature now exists to substantiate the long-held idea that musicians' brains differ structurally and functionally from non-musicians' brains. These differences include changes in volume, morphology, density, connectivity, and function across many regions of the brain. In addition to the extensive literature that investigates these differences cross-sectionally by comparing musicians and non-musicians, longitudinal studies have demonstrated the causal influence of music training on the brain across the lifespan. However, there is a large degree of inconsistency in the findings, with discordance between studies, laboratories, and techniques. A review of this literature highlights a number of variables that appear to moderate the relationship between music training and brain structure and function. These include age at commencement of training, sex, absolute pitch (AP), type of training, and instrument of training. These moderating variables may account for previously unexplained discrepancies in the existing literature, and we propose that future studies carefully consider research designs and methodologies that control for these variables.
ABSTRACT: The present study introduces a novel tool for assessing musical abilities in children: the Montreal Battery of Evaluation of Musical Abilities (MBEMA). The battery, which comprises tests of memory, scale, contour, interval, and rhythm, was administered to 245 children in Montreal and 91 in Beijing (Experiment 1), and an abbreviated version was administered to an additional 85 children in Montreal (in less than 20 min; Experiment 2). All children were 6-8 years of age. Their performance indicated that both versions of the MBEMA are sensitive to individual differences and to musical training. The sensitivity of the tests extends to Mandarin-speaking children despite the fact that they show enhanced performance relative to French-speaking children. Because this Chinese advantage is not limited to musical pitch but extends to rhythm and memory, it is unlikely that it results from early exposure to a tonal language. In both cultures and versions of the tests, amount of musical practice predicts performance. Thus, the MBEMA can serve as an objective, short, and up-to-date test of musical abilities in a variety of situations, from the identification of children with musical difficulties to the assessment of the effects of musical training in typically developing children of different cultures.
ABSTRACT: The Musical Emotional Bursts (MEB) consist of 80 brief musical executions expressing basic emotional states (happiness, sadness, and fear) and neutrality. These musical bursts were designed to be the musical analog of the Montreal Affective Voices (MAV), a set of brief non-verbal affective vocalizations portraying different basic emotions. The MEB consist of short (mean duration: 1.6 s) improvisations on a given emotion or of imitations of a given MAV stimulus, played on a violin (10 stimuli × 4 [3 emotions + neutral]) or a clarinet (10 stimuli × 4 [3 emotions + neutral]). The MEB arguably represent a primitive form of musical emotional expression, just as the MAV represent a primitive form of vocal, non-linguistic emotional expression. To create the MEB, stimuli were recorded from 10 violinists and 10 clarinetists, and then evaluated by 60 participants. Participants evaluated 240 stimuli [30 stimuli × 4 (3 emotions + neutral) × 2 instruments] by performing either a forced-choice emotion categorization task, a valence rating task, or an arousal rating task (20 subjects per task); 40 MAVs were also used in the same session with similar task instructions. Recognition accuracy of emotional categories expressed by the MEB (n = 80) was lower than for the MAVs but still very high, with an average percent correct recognition score of 80.4%. The highest recognition accuracies were obtained for happy clarinet (92.0%) and fearful or sad violin (88.0% each) MEB stimuli. The MEB can be used to compare the cerebral processing of emotional expressions in music and vocal communication, or for testing affective perception in patients with communication problems.
ABSTRACT: Fundamental to the experience of music, beat and meter perception refers to the perception, while listening to music, of periodicities occurring within the frequency range of musical tempo. Here, we explored the spontaneous building of beat and meter, hypothesized to emerge from the selective entrainment of neuronal populations at beat and meter frequencies. The electroencephalogram (EEG) was recorded while human participants listened to rhythms consisting of short sounds alternating with silences, so as to induce a spontaneous perception of beat and meter. We found that the rhythmic stimuli elicited multiple steady-state evoked potentials (SS-EPs), observed in the EEG spectrum at frequencies corresponding to the rhythmic pattern envelope. Most importantly, the amplitudes of the SS-EPs obtained at beat and meter frequencies were selectively enhanced even though the acoustic energy was not necessarily predominant at these frequencies. Furthermore, accelerating the tempo of the rhythmic stimuli so as to move away from the range of frequencies at which beats are usually perceived impaired the selective enhancement of SS-EPs at these frequencies. The observation that beat- and meter-related SS-EPs are selectively enhanced at frequencies compatible with beat and meter perception indicates that these responses do not merely reflect the physical structure of the sound envelope but, instead, reflect the spontaneous emergence of an internal representation of beat, possibly through a mechanism of selective neuronal entrainment within a resonance frequency range. Taken together, these results suggest that musical rhythms constitute a unique context for gaining insight into general mechanisms of entrainment, from the neuronal level to the individual level.
Journal of Neuroscience 12/2012; 32(49):17572-17581. · 6.91 Impact Factor
ABSTRACT: Some combinations of musical notes sound pleasing and are termed "consonant," but others sound unpleasant and are termed "dissonant." The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely believed to be the product of "beating": interference between frequency components in the cochlea that is thought to be more pronounced in dissonant than in consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, have also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics' preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, nor could they discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead suggest the need to integrate harmonicity as a foundation of music preferences, and they illustrate how amusia may be used to investigate normal auditory function.
Proceedings of the National Academy of Sciences 11/2012; · 9.74 Impact Factor
ABSTRACT: A longstanding issue in psychology is the relationship between how we perceive the world and how we act upon it. Pitch deafness provides an interesting opportunity to test for the independence of perception and production abilities in the speech domain. We tested eight amusics and eight matched controls for their ability to perceive pitch shifts in sentences and to imitate those same sentences. Congenital amusics were impaired in their ability to discriminate, but not to imitate, different intonations in speech. These findings support the idea that, when we hear a vocally imitatable sound, our brains encode it in two distinct ways: an abstract code, which allows us to identify it and compare it to other sounds, and a vocal-motor code, which allows us to imitate it.
ABSTRACT: The ability to carry a tune, natural for the majority, is underpinned by a complex functional system (i.e., the vocal sensorimotor loop, VSL). The VSL involves various components, including perceptual mechanisms, auditory-motor mapping, motor control, and memory. The malfunction of any one of these components can bring about poor-pitch singing. So far, disturbed perception and deficient sensorimotor mapping have been treated as important causes of poor singing, yet relatively little attention has been paid to memory. Here, we review results obtained from both occasional singers and individuals suffering from congenital amusia, who were asked to produce from memory or imitate a well-known melody under conditions with different memory loads. The findings point to memory as a relevant source of impairment in poor-pitch singing and to imitation as a useful aid for poor singers.
Annals of the New York Academy of Sciences 04/2012; 1252:338-44. · 4.38 Impact Factor
ABSTRACT: The acquisition of both speech and music uses general principles: learners extract statistical regularities present in the environment. Yet, individuals who suffer from congenital amusia (commonly called tone-deafness) have experienced lifelong difficulties in acquiring basic musical skills, while their language abilities appear essentially intact. One possible account for this dissociation between music and speech is that amusics lack normal experience with music. If given appropriate exposure, amusics might be able to acquire basic musical abilities. To test this possibility, a group of 11 adults with congenital amusia, and their matched controls, were exposed to a continuous stream of syllables or tones for 21 minutes. Their task was to try to identify three-syllable nonsense words or three-tone motifs having an identical statistical structure. The results of five experiments show that amusics can learn novel words as easily as controls, whereas they systematically fail on musical materials. Thus, inappropriate musical exposure cannot fully account for the musical disorder. Implications of the results for the domain specificity of statistical learning are discussed.
Annals of the New York Academy of Sciences 04/2012; 1252:361-7. · 4.38 Impact Factor
ABSTRACT: The conference entitled "The Neurosciences and Music-IV: Learning and Memory" was held at the University of Edinburgh from June 9-12, 2011, jointly hosted by the Mariani Foundation and the Institute for Music in Human and Social Development, and involving nearly 500 international delegates. Two opening workshops, three large and vibrant poster sessions, and nine invited symposia introduced a diverse range of recent research findings and discussed current research directions. Here, the proceedings are introduced by the workshop and symposia leaders on topics including working with children, rhythm perception, language processing, cultural learning, memory, musical imagery, neural plasticity, stroke rehabilitation, autism, and amusia. The rich diversity of the interdisciplinary research presented suggests that the future of music neuroscience looks both exciting and promising, and that important implications for music rehabilitation and therapy are being discovered.
Annals of the New York Academy of Sciences 04/2012; 1252:1-16. · 4.38 Impact Factor
ABSTRACT: Numerous studies have demonstrated the capacity of music to modulate pain. However, the neurophysiological mechanisms responsible for this phenomenon remain unknown. In order to assess the involvement of descending modulatory mechanisms in the modulation of pain by music, we evaluated the effects of musical excerpts conveying different emotions (pleasant-stimulating, pleasant-relaxing, unpleasant-stimulating) on the spinally mediated nociceptive flexion reflex (or RIII), as well as on pain ratings and skin conductance responses. The RIII reflex and pain ratings were increased while subjects listened to unpleasant music compared with pleasant music, suggesting the involvement of descending pain-modulatory mechanisms in the effects of musical emotions on pain. There were no significant differences between the pleasant-stimulating and pleasant-relaxing musical conditions, indicating that the arousal of the music had little influence on pain processing.
European journal of pain (London, England) 01/2012; 16(6):870-7. · 3.37 Impact Factor
ABSTRACT: Deficits in pitch structure processing in congenital amusia have mostly been reported for melodic stimuli and explicit judgments. The present study investigated congenital amusia with harmonic stimuli and a priming task. Amusic and control participants performed a speeded phoneme discrimination task on sung chord sequences. The target phoneme was sung either on a functionally important chord (the tonic chord, referred to as the "related target") or on a less important one (the subdominant chord, referred to as the "less-related target"). Correct response times were faster when the target phoneme was sung on tonic chords rather than on subdominant chords, and this effect was less pronounced, albeit still significant, in amusic participants. These data reveal for the first time a deficit in congenital amusia for chord processing, but they also provide evidence that, despite this deficit, amusic individuals have internalized sophisticated syntactic-like functions of chords in the Western tonal musical system. This finding suggests that, thanks to this musical knowledge, amusic individuals can develop expectancies for musical events and, presumably, follow the tension-relaxation schemas of Western tonal music, which also influence emotional responses to music.
ABSTRACT: The strong association between music and speech has been supported by recent research focusing on musicians' superior abilities in second language learning and in the neural encoding of foreign speech sounds. However, evidence for a double association (the influence of linguistic background on music pitch processing and disorders) remains elusive. Because languages differ in their usage of elements (e.g., pitch) that are also essential for music, a unique opportunity for examining such language-to-music associations comes from a cross-cultural (linguistic) comparison of congenital amusia, a neurogenetic disorder affecting the music (pitch and rhythm) processing of about 5% of the Western population. In the present study, two populations (Hong Kong and Canada) were compared. One spoke a tone language in which differences in voice pitch correspond to differences in word meaning (in Hong Kong Cantonese, /si/ means 'teacher' and 'to try' when spoken in a high and a mid pitch pattern, respectively). Using the On-line Identification Test of Congenital Amusia, we found that Cantonese speakers as a group tend to show enhanced pitch perception ability compared to speakers of Canadian French and English (non-tone languages). This enhanced ability occurs in the absence of differences in rhythmic perception and persists even after relevant factors such as musical background and age were controlled. Following a common definition of amusia (affecting 5% of the population), we found that Hong Kong pitch amusics also show enhanced pitch abilities relative to their Canadian counterparts. These findings not only provide critical evidence for a double association of music and speech, but also argue for the reconceptualization of communicative disorders within a cultural framework.
Along with recent studies documenting cultural differences in visual perception, our auditory evidence challenges the common assumption of universality of basic mental processes and speaks to the domain generality of culture-to-perception influences.
PLoS ONE 01/2012; 7(4):e33424. · 3.73 Impact Factor