Isabelle Peretz

McGill University, Montréal, Quebec, Canada


Publications (216) · 865.03 Total Impact Points

  • ABSTRACT: The inability to vocally match a pitch can be caused by poor pitch perception or by poor vocal-motor control. Although previous studies have tried to examine the relationship between pitch perception and vocal production, they have failed to control for the timbre of the target to be matched. In the present study, we compared pitch-matching accuracy with the voice and with an unfamiliar instrument (the slider), designed such that the slider plays back recordings of the participant's own voice. We also measured pitch accuracy in singing a familiar melody ("Happy Birthday") to assess the relationship between single-pitch-matching tasks and melodic singing. Our results showed that participants (all nonmusicians) were significantly better at matching recordings of their own voices with the slider than with their voice, indicating that vocal-motor control is an important limiting factor on singing ability. We also found significant correlations between the ability to sing a melody in tune and vocal pitch matching, but not pitch matching on the slider. Better melodic singers also tended to have higher-quality voices (as measured by acoustic variables). These results provide important evidence about the role of vocal-motor control in poor singing ability and demonstrate that single-pitch-matching tasks can be useful in measuring general singing abilities.
    Attention, Perception, & Psychophysics 07/2014; · 1.97 Impact Factor
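A note on the measure: pitch-matching accuracy in studies like the one above is typically reported as a deviation in cents, where 100 cents is one semitone and 1200 cents is one octave. A minimal sketch of that standard conversion (the exact scoring used in the paper is not given here, so treat this as illustrative):

```python
import math

def cents(f_produced: float, f_target: float) -> float:
    """Signed deviation of a produced pitch from a target, in cents
    (100 cents = one semitone, 1200 cents = one octave)."""
    return 1200.0 * math.log2(f_produced / f_target)

# Example: singing 427 Hz against a 440 Hz (A4) target
print(round(cents(427.0, 440.0), 1))  # about -51.9, i.e. roughly half a semitone flat
```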
  • ABSTRACT: Previous studies have suggested that presenting to-be-memorised lyrics in a singing mode, instead of a speaking mode, may facilitate learning and retention in normal adults. In this study, seven healthy older adults and eight participants with mild Alzheimer's disease (AD) learned and memorised lyrics that were either sung or spoken. We measured the percentage of words recalled from these lyrics immediately and after 10 minutes. Moreover, in AD participants, we tested the effect of successive learning episodes for one spoken and one sung excerpt, as well as long-term retention after a four-week delay. Singing did not influence immediate recall of the lyrics but increased delayed recall in both groups. In AD, learning slopes for sung and spoken lyrics did not differ significantly across successive learning episodes. However, sung lyrics showed a slight advantage over spoken ones after the four-week delay. These results suggest that singing may increase the load of initial learning but improve long-term retention of newly acquired verbal information. We further propose some recommendations on how to maximise these effects and make them relevant for therapeutic applications.
    Neuropsychological Rehabilitation 06/2014; · 2.01 Impact Factor
  • ABSTRACT: Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing "biologically relevant" emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related fMRI study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness, and happiness, as well as neutral) expressed through faces, nonlinguistic vocalizations, and short, novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by a significant BOLD signal increase in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in processing musical emotions is shared with the circuitry that evolved for vocalizations. Overall, our results show that processing fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.
    Social Cognitive and Affective Neuroscience 05/2014; · 5.04 Impact Factor
  • ABSTRACT: We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity during the retention interval of an auditory memory task that increased with the number of items effectively held in memory by the participants. We used very simple acoustic materials (i.e., pure tones that varied in pitch) that minimized activation from systems unrelated to ASTM. MEG revealed neural activity in frontal, temporal, and parietal cortex that increased with the number of items effectively held in memory during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortex in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in ASTM capacity. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain.
    NeuroImage 03/2014; · 6.25 Impact Factor
  • ABSTRACT: Strong links between music and motor functions suggest that music could represent an interesting aid for motor learning. The present study aims, for the first time, to test the potential of music to assist in the learning of sequences of gestures in normal and pathological aging. Participants with mild Alzheimer's disease (AD) and healthy older adults (controls) learned sequences of meaningless gestures that were accompanied by either music or a metronome. We also manipulated the learning procedure such that participants had to imitate the to-be-memorized gestures either in synchrony with the experimenter or after the experimenter during encoding. Results show different patterns of performance for the two groups. Overall, musical accompaniment had no impact on the controls' performance but improved that of AD participants. Conversely, synchronization of gestures during learning helped controls but seemed to interfere with retention in AD. We discuss the relevance of these findings for a better understanding of auditory-motor memory, and we propose recommendations to maximize the mnemonic effect of music for motor sequence learning in dementia care.
    Frontiers in Human Neuroscience 01/2014; 8:294. · 2.91 Impact Factor
  • Dawn L Merrett, Isabelle Peretz, Sarah J Wilson
    ABSTRACT: Singing has been used in language rehabilitation for decades, yet controversy remains over its effectiveness and mechanisms of action. Melodic Intonation Therapy (MIT) is the most well-known singing-based therapy; however, speculation surrounds when and how it might improve outcomes in aphasia and other language disorders. While positive treatment effects have been variously attributed to different MIT components, including melody, rhythm, hand-tapping, and the choral nature of the singing, there is uncertainty about the components that are truly necessary and beneficial. Moreover, the mechanisms by which the components operate are not well understood. Within the literature to date, proposed mechanisms can be broadly grouped into four categories: (1) neuroplastic reorganization of language function, (2) activation of the mirror neuron system and multimodal integration, (3) utilization of shared or specific features of music and language, and (4) motivation and mood. In this paper, we review available evidence for each mechanism and propose that these mechanisms are not mutually exclusive, but rather represent different levels of explanation, reflecting the neurobiological, cognitive, and emotional effects of MIT. Thus, instead of competing, each of these mechanisms may contribute to language rehabilitation, with a better understanding of their relative roles and interactions allowing the design of protocols that maximize the effectiveness of singing therapy for aphasia.
    Frontiers in Human Neuroscience 01/2014; 8:401. · 2.91 Impact Factor
  • ABSTRACT: Music is an integral part of the cultural heritage of all known human societies, with the capacity for music perception and production present in most people. Researchers generally agree that both genetic and environmental factors contribute to the broader realization of music ability, with the degree of music aptitude varying, not only from individual to individual, but across various components of music ability within the same individual. While environmental factors influencing music development and expertise have been well investigated in the psychological and music literature, the interrogation of possible genetic influences has not progressed at the same rate. Recent advances in genetic research offer fertile ground for exploring the genetic basis of music ability. This paper begins with a brief overview of behavioral and molecular genetic approaches commonly used in human genetic analyses, and then critically reviews the key findings of genetic investigations of the components of music ability. Some promising and converging findings have emerged, with several loci on chromosome 4 implicated in singing and music perception, and certain loci on chromosome 8q implicated in absolute pitch and music perception. The gene AVPR1A on chromosome 12q has also been implicated in music perception, music memory, and music listening, whereas SLC6A4 on chromosome 17q has been associated with music memory and choir participation. Replication of these results in alternate populations and with larger samples is warranted to confirm the findings. Through increased research efforts, a clearer picture of the genetic mechanisms underpinning music ability will hopefully emerge.
    Frontiers in Psychology 01/2014; 5:658. · 2.80 Impact Factor
  • ABSTRACT: Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) with functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli compared with other complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (planum polare) that showed preferential activity in response to musical stimuli, was present in all our subjects regardless of musical training, and was invariant across different musical instruments (violin, piano, or synthetic piano). Our data show that this cortical region is preferentially involved in processing music, as compared with other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than in non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
    Cortex 01/2014; · 6.16 Impact Factor
  • Anna Zumbansen, Isabelle Peretz, Sylvie Hébert
    ABSTRACT: Melodic intonation therapy (MIT) is a structured protocol for language rehabilitation in people with Broca's aphasia. The main particularity of MIT is the use of intoned speech, a technique in which the clinician stylizes the prosody of short sentences using simple pitch and rhythm patterns. In the original MIT protocol, patients must repeat diverse sentences in order to adopt this way of speaking, with the goal of improving their natural, connected speech. MIT has long been regarded as a promising treatment, but its mechanisms are still debated. Recent work showed that rhythm plays a key role in variations of MIT, leading some to consider the use of pitch in MIT relatively unnecessary. Our study primarily aimed to assess the relative contributions of rhythm and pitch to MIT's generalization effect on non-trained stimuli and on connected speech. We compared a melodic therapy (with pitch and rhythm) to a rhythmic therapy (with rhythm only) and to a normally spoken therapy (without melodic elements). Three participants with chronic post-stroke Broca's aphasia underwent the treatments in hourly sessions, 3 days per week for 6 weeks, in a cross-over design. The informativeness of connected speech, speech accuracy on trained and non-trained sentences, motor-speech agility, and mood were assessed before and after the treatments. The results show that all three treatments improved speech accuracy on trained sentences, but that the combination of rhythm and pitch elicited the strongest generalization effect, both to non-trained stimuli and to connected speech. No significant change was measured in motor-speech agility or mood with any treatment. The results emphasize the beneficial effect of both rhythm and pitch on the efficacy of original MIT for connected speech, an outcome of primary clinical importance in aphasia therapy.
    Frontiers in Human Neuroscience 01/2014; 8:592. · 2.91 Impact Factor
  • Anna Zumbansen, Isabelle Peretz, Sylvie Hébert
    ABSTRACT: We present a critical review of the literature on melodic intonation therapy (MIT), one of the most formalized treatments used by speech-language therapists in Broca's aphasia. We suggest basic clarifications to enhance the scientific support of this promising treatment. First, therapeutic protocols using singing as a speech facilitation technique are not necessarily MIT. The goal of MIT is to restore propositional speech. The rationale is that patients can learn a new way to speak through singing by using language-capable regions of the right cerebral hemisphere. Eventually, patients are supposed to use this way of speaking permanently, but not to sing overtly. We argue that many treatment programs covered in systematic reviews of MIT's efficacy do not match MIT's therapeutic goal and rationale. Critically, we identified two main variations of MIT: the French thérapie mélodique et rythmée (TMR), which trains patients to use singing overtly as a facilitation technique in cases of speech struggle, and palliative versions of MIT that help patients with the most severe expressive deficits produce a limited set of useful, ready-made phrases. Second, we distinguish between the immediate effect of singing on speech production and the long-term effect of the entire program on language recovery. Many results in the MIT literature can be explained by this temporal perspective. Finally, we propose that MIT can be viewed as a treatment of apraxia of speech more than of aphasia. This issue should be explored in future experimental studies.
    Frontiers in Neurology 01/2014; 5:7.
  • ABSTRACT: Synchronizing movements with rhythmic inputs requires tight coupling of sensory and motor neural processes. Here, using a novel approach based on the recording of steady-state evoked potentials (SS-EPs), we examine how distant brain areas supporting these processes coordinate their dynamics. The electroencephalogram was recorded while subjects listened to a 2.4-Hz auditory beat and tapped their hand on every second beat. When subjects tapped to the beat, the EEG was characterized by a 2.4-Hz SS-EP compatible with beat-related entrainment and a 1.2-Hz SS-EP compatible with movement-related entrainment, based on the results of source analysis. Most importantly, when compared with passive listening to the beat, we found evidence suggesting an interaction between sensory- and motor-related activities when subjects tapped to the beat, in the form of (1) an additional SS-EP appearing at 3.6 Hz, compatible with a nonlinear product of sensorimotor integration; (2) phase coupling of beat- and movement-related activities; and (3) selective enhancement of beat-related activities over the hemisphere contralateral to the tapping, suggesting a top-down effect of movement-related activities on auditory beat processing. Taken together, our results are compatible with the view that rhythmic sensorimotor synchronization is supported by a dynamic coupling of sensory- and motor-related activities.
    Cerebral Cortex 10/2013; · 8.31 Impact Factor
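The logic of the SS-EP approach above is that periodic stimulation and periodic movement each concentrate EEG power at their exact driving frequencies, so entrainment can be read off narrow spectral peaks; note that 3.6 Hz is the sum of the 2.4-Hz beat and 1.2-Hz movement frequencies, the signature of a nonlinear interaction. A minimal, self-contained sketch of this frequency-tagging idea on a synthetic signal (not the authors' analysis pipeline; sampling rate, durations, and amplitudes are arbitrary):

```python
import numpy as np

fs = 512.0                      # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # 60 s of simulated "EEG"

# Beat-related (2.4 Hz), movement-related (1.2 Hz), and a nonlinear
# sensorimotor product at 2.4 + 1.2 = 3.6 Hz, buried in noise.
rng = np.random.default_rng(0)
eeg = (1.0 * np.sin(2 * np.pi * 2.4 * t)
       + 0.8 * np.sin(2 * np.pi * 1.2 * t)
       + 0.3 * np.sin(2 * np.pi * 3.6 * t)
       + rng.normal(0, 2.0, t.size))

amps = 2 * np.abs(np.fft.rfft(eeg)) / t.size   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for f in (1.2, 2.4, 3.6):
    k = np.argmin(np.abs(freqs - f))           # bin centred on the tagged frequency
    print(f"{f} Hz: amplitude {amps[k]:.2f}")  # peaks stand out against the noise floor
```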
  • ABSTRACT: We compared the electrophysiological correlates of the maintenance of non-musical tone sequences in auditory short-term memory (ASTM) with those of the short-term maintenance of sequences of coloured disks held in visual short-term memory (VSTM). The visual stimuli yielded a sustained posterior contralateral negativity (SPCN), suggesting that maintenance of sequences of coloured stimuli engaged structures similar to those involved in the maintenance of simultaneous visual displays. On the other hand, maintenance of acoustic sequences produced a sustained negativity at fronto-central sites, a component we name the sustained anterior negativity (SAN). The amplitude of the SAN increased with increasing load in ASTM and predicted individual differences in performance. There was no SAN in a control condition with the same auditory stimuli but no memory task, nor was one associated with visual memory. These results suggest that the SAN is an index of brain activity related to the maintenance of representations in ASTM that is distinct from the maintenance of representations in VSTM.
    Neuropsychologia 08/2013; · 3.48 Impact Factor
  • ABSTRACT: Pitch deafness, the most commonly known form of congenital amusia, refers to a severe deficit in musical pitch processing (i.e., melody discrimination and recognition) that can leave time processing, including rhythm, metre, and "feeling the beat", preserved. In Experiment 1, we show that when musical excerpts are presented in nonpitched drum timbres, rather than pitched piano tones, amusics show normal metre recognition. Experiment 2 reveals that body movement influences amusics' interpretation of the beat of an ambiguous drum rhythm. Experiment 3 and a subsequent exploratory study show an ability to synchronize movement to the beat of popular dance music, with potential for improvement given a modest amount of practice. Together, the present results are consistent with the idea that rhythm and beat processing are spared in pitch deafness; that is, being pitch-deaf does not mean one is beat-deaf. In the context of drum music especially, amusics can be musical.
    Cognitive Neuropsychology 07/2013; 30(5):311-331. · 1.52 Impact Factor
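Synchronization to a beat, as tested in Experiment 3 above, is commonly quantified with circular statistics on tap times: each tap is converted to a phase within the beat cycle, and the mean resultant vector length indexes consistency. The paper's exact scoring is not given here; the following is an illustrative sketch of that standard measure:

```python
import numpy as np

def vector_strength(tap_times: np.ndarray, beat_period: float) -> float:
    """Mean resultant length of tap phases relative to the beat:
    1.0 = perfectly consistent phase, near 0.0 = random tapping."""
    phases = 2 * np.pi * (tap_times % beat_period) / beat_period
    return float(np.abs(np.mean(np.exp(1j * phases))))

# Example: taps near a 0.5-s beat period (120 BPM) with 20 ms of jitter
rng = np.random.default_rng(1)
taps = np.arange(0, 30, 0.5) + rng.normal(0, 0.02, 60)
print(round(vector_strength(taps, 0.5), 3))  # close to 1 for a good synchronizer
```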
  • ABSTRACT: OBJECTIVE: To examine the mechanisms responsible for the reduction of the mismatch negativity (MMN) ERP component observed in response to pitch changes when the soundtrack of a movie is presented while recording the MMN. METHODS: In three experiments we measured the MMN to tones that differed in pitch from a repeated standard tone presented with a silent subtitled movie, with the soundtrack played forward or backward, or with soundtracks set at different intensity levels. RESULTS: MMN amplitude was reduced when the soundtrack was presented either forward or backward compared to the silent subtitled movie. With the soundtrack, MMN amplitude increased proportionally to the increments in the sound-to-noise intensity ratio. CONCLUSION: MMN was reduced in amplitude but had normal morphology with a concurrent soundtrack, most likely because of basic acoustical interference from the soundtrack with MMN-critical tones rather than from attentional effects. SIGNIFICANCE: A normal MMN can be recorded with a concurrent movie soundtrack, but signal amplitudes need to be set with caution to ensure a sufficiently high sound-to-noise ratio between MMN stimuli and the soundtrack.
    Clinical Neurophysiology 06/2013; · 3.12 Impact Factor
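The practical recommendation above hinges on the intensity ratio between the MMN-critical tones and the soundtrack (what the abstract calls the sound-to-noise ratio). Expressed in decibels, such a ratio follows the standard amplitude-to-dB relation; a tiny illustrative helper (the paper's actual calibration levels are not given here):

```python
import math

def snr_db(rms_signal: float, rms_noise: float) -> float:
    """Signal-to-noise ratio in decibels from RMS amplitudes."""
    return 20.0 * math.log10(rms_signal / rms_noise)

# Example: MMN tones at 0.2 RMS against a soundtrack at 0.05 RMS
print(round(snr_db(0.2, 0.05), 1))  # 12.0 dB
```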
  • ABSTRACT: Congenital amusia, a neurogenetic disorder, affects primarily pitch and melody perception. Here we test the hypothesis that amusics suffer from impaired access to the spectro-temporal fine-structure cues associated with low-order resolved harmonics. The hypothesis is motivated by the fact that tones containing only unresolved harmonics yield poorer pitch sensitivity in normal-hearing listeners. Fundamental-frequency difference limens (F0DLs) were measured in amusics and matched controls for harmonic complexes containing either resolved or unresolved harmonics. Sensitivity to temporal fine structure was assessed via interaural-time-difference (ITD) thresholds, intensity resolution was probed via interaural-level-difference (ILD) thresholds and intensity difference limens, and spectral resolution was estimated using the notched-noise method. As expected, F0DLs were elevated in amusics for resolved harmonics; however, no difference between amusics and controls was found for F0DLs using unresolved harmonics. The deficit appears unlikely to be due to temporal-fine-structure coding, as ITD thresholds were unimpaired in the amusic group. In addition, no differences were found between the two groups in ILD thresholds, intensity difference limens, or auditory-filter bandwidths. Overall, the results suggest a pitch-specific deficit in fine spectro-temporal information processing in amusia that cannot be ascribed to defective temporal-fine-structure or spectral encoding in the auditory periphery. [Work supported by Fyssen Foundation, Erasmus Mundus, CIHR, and NIH grant R01DC05216.]
    The Journal of the Acoustical Society of America 05/2013; 133(5):3381. · 1.65 Impact Factor
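The resolved/unresolved distinction above comes down to harmonic number: low-numbered harmonics of a complex tone fall into separate cochlear filters (resolved), while high-numbered ones crowd into the same filter (unresolved), leaving mainly temporal-envelope cues to pitch. A minimal sketch of how such equal-amplitude complexes can be synthesized for illustration (the study's actual harmonic ranges, levels, and maskers are not given here):

```python
import numpy as np

def harmonic_complex(f0, harmonics, fs=44100, dur=0.5):
    """Equal-amplitude complex tone: a sum of sines at integer multiples of f0."""
    t = np.arange(0, dur, 1 / fs)
    return sum(np.sin(2 * np.pi * n * f0 * t) for n in harmonics)

f0 = 100.0  # Hz
resolved = harmonic_complex(f0, range(2, 7))      # low-order harmonics (2-6): resolved
unresolved = harmonic_complex(f0, range(13, 19))  # high-order harmonics (13-18): unresolved
```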
  • Patricia Moreau, Pierre Jolicœur, Isabelle Peretz
    ABSTRACT: Congenital amusia is a lifelong disorder characterized by a difficulty in perceiving and producing music despite normal intelligence and hearing. Behavioral data have indicated that it originates from a deficit in fine-grained pitch discrimination, and it is expressed by the absence of a P3b event-related brain response to pitch differences smaller than a semitone and a larger N2b-P3b brain response to large pitch differences as compared with controls. However, it is still unclear why the amusic brain overreacts to large pitch changes. Furthermore, another electrophysiological study indicates that the amusic brain can respond to changes in melodies as small as a quarter-tone, without awareness, by exhibiting a normal mismatch negativity (MMN) brain response. Here, we re-examine the event-related N2b-P3b components with the aim of clarifying the cause of the larger amplitude observed by Peretz, Brattico, and Tervaniemi (2005), by experimentally matching the number of deviants presented to the controls to the number of deviants detected by the amusics. We also re-examine the MMN component, as well as the N1, in an acoustical context to further investigate the pitch discrimination deficit underlying congenital amusia. In two separate conditions, namely ignore and attend, we measured the MMN, the N1, the N2b, and the P3b to tones that deviated by an eighth of a tone (25 cents) or a whole tone (200 cents) from a repeated standard tone. The results show a normal MMN, a seemingly normal N1, a normal P3b for the 200-cent pitch deviance, and no P3b for the small 25-cent pitch differences in amusics. These results indicate that the amusic brain responds to small pitch differences at a pre-attentive level of perception but is unable to consciously detect those same pitch deviances at a later, attentive level. The results are consistent with previous MRI and fMRI studies indicating that the auditory cortex of amusic individuals is functioning normally.
    Brain and Cognition 04/2013; 81(3):337-44. · 2.82 Impact Factor
  • Sean Hutchins, Isabelle Peretz
    ABSTRACT: We tested whether congenital amusics, who exhibit pitch perception deficits, nevertheless adjust the pitch of their voice in response to a sudden pitch shift applied to vocal feedback. Nine amusics and matched controls imitated their own previously recorded speech or singing, while the online feedback they received was shifted mid-utterance by 25 or 200 cents. While a few amusics failed to show pitch-shift effects, a majority showed a pitch-shift response, and nearly half showed a normal response to both large and small shifts, with magnitudes and response times similar to those of controls. The size and presence of the response to small shifts were significantly predicted by participants' vocal pitch-matching accuracy, rather than by their ability to perceive small pitch changes. The observed dissociation between the ability to consciously perceive small pitch changes and the ability to produce and monitor vocal pitch provides evidence for a dual-route model of pitch processing in the brain.
    Brain and Language 03/2013; 125(1):106-117. · 3.39 Impact Factor
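For a concrete sense of the shift sizes used above: a shift in cents multiplies the fundamental frequency by 2^(cents/1200), so 25 cents is a barely noticeable eighth of a tone while 200 cents is a whole tone. A small illustrative helper (the frequencies below are examples, not stimuli from the study):

```python
def shift_hz(f0: float, cents: float) -> float:
    """Frequency obtained by shifting f0 by the given number of cents."""
    return f0 * 2.0 ** (cents / 1200.0)

print(round(shift_hz(220.0, 25.0), 1))   # 223.2 Hz: a subtle 25-cent shift
print(round(shift_hz(220.0, 200.0), 1))  # 246.9 Hz: a whole-tone (200-cent) shift
```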
  • William Aubé, Isabelle Peretz, Jorge L Armony
    ABSTRACT: Music is a powerful tool for communicating emotions, and it can elicit memories through associative mechanisms. However, it is currently unknown whether emotion can modulate memory for music without reference to a context or personal event. We conducted three experiments to investigate the effect of basic emotions (fear, happiness, and sadness) on recognition memory for music, using short, novel stimuli explicitly created for research purposes, and compared them with nonlinguistic vocalisations. Results showed better memory accuracy for musical clips expressing fear and, to some extent, happiness. In the case of nonlinguistic vocalisations, we confirmed a memory advantage for all emotions tested. A correlation between memory accuracy for music and for vocalisations was also found, particularly in the case of fearful expressions. These results confirm that emotional expressions conveyed by music, particularly fearful ones, can influence memory, as has been previously shown for other forms of expression, such as faces and vocalisations.
    Memory 02/2013; · 2.09 Impact Factor

Publication Stats

6k Citations
865.03 Total Impact Points

Institutions

  • 2007–2014
    • McGill University
      • Department of Music Research
      Montréal, Quebec, Canada
    • Montreal Heart Institute
      Montréal, Quebec, Canada
  • 1995–2014
    • Université du Québec à Montréal
      • Department of Psychology
      Montréal, Quebec, Canada
  • 1987–2014
    • Université de Montréal
      • Department of Psychology
      • International Laboratory for Brain, Music and Sound Research
      Montréal, Quebec, Canada
  • 2013
    • University of Melbourne
      • Melbourne School of Psychological Sciences
      Melbourne, Victoria, Australia
  • 2011–2013
    • Catholic University of Louvain
      • Institute of Neuroscience
      Walloon Region, Belgium
    • University of Lille Nord de France
      Lille, Nord-Pas-de-Calais, France
    • University of Franche-Comté
      Besançon, Franche-Comté, France
  • 2012
    • Northwestern University
      • Roxelyn and Richard Pepper Department of Communication Sciences and Disorders
      Evanston, IL, United States
    • Université de Montpellier 1
      Montpellier, Languedoc-Roussillon, France
  • 2011–2012
    • Lyon Neuroscience Research Center
      Lyon, Rhône-Alpes, France
  • 2010
    • University of Lyon
      Lyon, Rhône-Alpes, France
    • Beijing Normal University
      Beijing, China
    • Université Victor Segalen Bordeaux 2
      Bordeaux, Aquitaine, France
    • Aristotle University of Thessaloniki
      • Department of Psychology (Τμήμα Ψυχολογίας)
      Thessaloníki, Kentriki Makedonia, Greece
  • 1980–2010
    • Université Libre de Bruxelles
      • Research Centre of Cognition and Neurosciences (CRNC)
      Bruxelles, Brussels Capital Region, Belgium
  • 2009
    • Fonds de la Recherche Scientifique (FNRS)
      Bruxelles, Brussels Capital Region, Belgium
  • 2005–2009
    • Wyższa Szkoła Finansów i Zarządzania w Warszawie (University of Finance and Management in Warsaw)
      • Department of Cognitive Psychology
      Warszawa, Masovian Voivodeship, Poland
  • 2003–2009
    • University of Helsinki
      • Department of Psychology
      Helsinki, Province of Southern Finland, Finland
  • 2008
    • American University Washington D.C.
      Washington, D.C., United States
  • 2006
    • Keele University
      • School of Psychology
      Newcastle-under-Lyme, England, United Kingdom
    • Université Charles-de-Gaulle Lille 3
      Lille, Nord-Pas-de-Calais, France