Isabelle Peretz

McGill University, Montréal, Quebec, Canada

Publications (220) · 871.18 Total Impact Points

  • Caroline Palmer, Pascale Lidji, Isabelle Peretz
    ABSTRACT: Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961-969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, all of whom tapped with an auditory metronome into which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases showed failures of error correction in response to the perturbations while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception-action coupling. A damped harmonic oscillator model was fitted to the temporal adaptation responses; the model's parameters, relaxation time and endogenous frequency, captured the differences between the beat-deaf cases and the individual members of the control group.
    Philosophical Transactions of the Royal Society B: Biological Sciences 12/2014; 369(1658).
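
The damped harmonic oscillator named in this abstract is a standard way to model re-entrainment after a metronome perturbation. Below is a minimal numerical sketch of that idea; the functional form, parameter values, and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def oscillator_asynchrony(t, tau, f0, a0=50.0):
    """Tap-metronome asynchrony (ms) after a perturbation, modeled as a
    damped harmonic oscillation: the asynchrony decays back to zero with
    relaxation time tau (s) while oscillating at the endogenous frequency
    f0 (Hz). All values here are illustrative assumptions."""
    return a0 * np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

t = np.linspace(0.0, 4.0, 200)                         # 4 s after the perturbation
control = oscillator_asynchrony(t, tau=0.5, f0=2.0)    # rapid re-entrainment
beat_deaf = oscillator_asynchrony(t, tau=3.0, f0=2.0)  # sluggish error correction
```

On this account, a longer relaxation time leaves the asynchrony uncorrected across many taps, which is exactly the signature the perturbation task is designed to expose.
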
  • ABSTRACT: Pitch plays a fundamental role in audition, from speech and music perception to auditory scene analysis. Congenital amusia is a neurogenetic disorder that appears to affect primarily pitch and melody perception. Pitch is normally conveyed by the spectro-temporal fine structure of low harmonics, but some pitch information is available in the temporal envelope produced by the interactions of higher harmonics. Using 10 amusic subjects and 10 matched controls, we tested the hypothesis that amusics suffer exclusively from impaired processing of spectro-temporal fine structure. We also tested whether the inability of amusics to process acoustic temporal fine structure extends beyond pitch by measuring sensitivity to interaural time differences, which also rely on temporal fine structure. Further tests were carried out on basic intensity and spectral resolution. As expected, pitch perception based on spectro-temporal fine structure was impaired in amusics; however, no significant deficits were observed in amusics' ability to perceive the pitch conveyed via temporal-envelope cues. Sensitivity to interaural time differences was also not significantly different between the amusic and control groups, ruling out deficits in the peripheral coding of temporal fine structure. Finally, no significant differences in intensity or spectral resolution were found between the amusic and control groups. The results demonstrate a pitch-specific deficit in fine spectro-temporal information processing in amusia that seems unrelated to temporal or spectral coding in the auditory periphery. These results are consistent with the view that there are distinct mechanisms dedicated to processing resolved and unresolved harmonics in the general population, the former being altered in congenital amusia while the latter is spared.
    Neuropsychologia 11/2014 · 3.48 Impact Factor
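
The abstract's distinction between spectro-temporal fine structure (low, resolved harmonics) and temporal-envelope cues (interacting high, unresolved harmonics) can be illustrated by synthesizing two complex tones with the same fundamental. This is a sketch for illustration only; the harmonic ranges and parameters are assumptions, not the study's stimuli.

```python
import numpy as np

fs, dur, f0 = 44100, 0.5, 220.0          # sample rate (Hz), duration (s), F0 (Hz)
t = np.arange(int(fs * dur)) / fs

def harmonic_complex(f0, harmonics, t):
    """Sum of equal-amplitude cosine partials at integer multiples of f0."""
    return sum(np.cos(2 * np.pi * f0 * h * t) for h in harmonics)

# Low harmonics are resolved by the cochlea: pitch is carried by the
# spectro-temporal fine structure of the individual partials.
resolved = harmonic_complex(f0, range(1, 5), t)

# High harmonics fall within single auditory filters and interact: pitch is
# carried mainly by the envelope, which repeats at the period of f0.
unresolved = harmonic_complex(f0, range(12, 20), t)
```
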
  • ABSTRACT: Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, in part because of more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly than nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.
    Journal of Cognitive Neuroscience 11/2014 · 4.49 Impact Factor
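
The subtraction logic used here to isolate the N400 (active minus passive responses to identical stimuli) reduces to simple epoch averaging. A minimal NumPy sketch follows, assuming epochs are already segmented into (trials × channels × samples) arrays and that a 300-500 ms window approximates the N400; both are assumptions, not the paper's exact pipeline.

```python
import numpy as np

def difference_wave(active, passive):
    """Average each condition over trials, then subtract: responses common
    to both conditions cancel, leaving attention-dependent components such
    as the N400. Inputs: arrays of shape (n_trials, n_channels, n_samples)."""
    return active.mean(axis=0) - passive.mean(axis=0)

def mean_amplitude(erp, times, t_start=0.3, t_end=0.5):
    """Mean amplitude per channel in an assumed 300-500 ms N400 window."""
    window = (times >= t_start) & (times <= t_end)
    return erp[:, window].mean(axis=1)
```
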
  • ABSTRACT: The inability to vocally match a pitch can be caused by poor pitch perception or by poor vocal-motor control. Although previous studies have tried to examine the relationship between pitch perception and vocal production, they have failed to control for the timbre of the target to be matched. In the present study, we compared pitch-matching accuracy with the voice and with an unfamiliar instrument (the slider), designed such that the slider plays back recordings of the participant's own voice. We also measured pitch accuracy in singing a familiar melody ("Happy Birthday") to assess the relationship between single-pitch-matching tasks and melodic singing. Our results showed that participants (all nonmusicians) were significantly better at matching recordings of their own voices with the slider than with their voice, indicating that vocal-motor control is an important limiting factor on singing ability. We also found significant correlations between the ability to sing a melody in tune and vocal pitch matching, but not pitch matching on the slider. Better melodic singers also tended to have higher-quality voices (as measured by acoustic variables). These results provide important evidence about the role of vocal-motor control in poor singing ability and demonstrate that single-pitch-matching tasks can be useful in measuring general singing abilities.
    Attention Perception & Psychophysics 07/2014 · 1.97 Impact Factor
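
Pitch-matching accuracy in tasks like this is conventionally scored in cents, a logarithmic frequency scale on which 100 cents equals one semitone. A small sketch of that conversion follows; whether this study used exactly this measure is an assumption.

```python
import math

def cents_error(f_produced, f_target):
    """Signed pitch-matching error in cents (100 cents = 1 semitone)."""
    return 1200.0 * math.log2(f_produced / f_target)

print(cents_error(446.0, 440.0))  # ~23.5 cents sharp of the target
```
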
  • ABSTRACT: Previous studies have suggested that presenting to-be-memorised lyrics in a singing mode, rather than a speaking mode, may facilitate learning and retention in normal adults. In this study, seven healthy older adults and eight participants with mild Alzheimer's disease (AD) learned and memorised lyrics that were either sung or spoken. We measured the percentage of words recalled from these lyrics immediately and after 10 minutes. Moreover, in AD participants, we tested the effect of successive learning episodes for one spoken and one sung excerpt, as well as long-term retention after a four-week delay. Singing did not influence immediate recall of the lyrics but increased delayed recall in both groups. In AD, learning slopes for sung and spoken lyrics did not differ significantly across successive learning episodes. However, sung lyrics showed a slight advantage over spoken ones after the four-week delay. These results suggest that singing may increase the load of initial learning but improve long-term retention of newly acquired verbal information. We further propose recommendations on how to maximise these effects and make them relevant for therapeutic applications.
    Neuropsychological Rehabilitation 06/2014 · 2.01 Impact Factor
  • ABSTRACT: Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means of expressing emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing "biologically relevant" emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related fMRI study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, nonlinguistic vocalizations and short, novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant BOLD signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in processing musical emotions may be shared with the circuitry that evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.
    Social Cognitive and Affective Neuroscience 05/2014 · 5.04 Impact Factor
  • ABSTRACT: We used magnetoencephalography (MEG) to examine brain activity related to the maintenance of non-verbal pitch information in auditory short-term memory (ASTM). We focused on brain activity that increased with the number of items effectively held in memory by the participants during the retention interval of an auditory memory task. We used very simple acoustic materials (pure tones that varied in pitch) to minimize activation from systems unrelated to ASTM. MEG revealed neural activity in frontal, temporal, and parietal cortex that increased with the number of items held in memory during the maintenance of pitch representations in ASTM. The present results reinforce the functional role of frontal and temporal cortex in the retention of pitch information in ASTM. This is the first MEG study to provide both fine spatial localization and temporal resolution on the neural mechanisms of non-verbal ASTM for pitch in relation to individual differences in ASTM capacity. This research contributes to a comprehensive understanding of the mechanisms mediating the representation and maintenance of basic non-verbal auditory features in the human brain.
    NeuroImage 03/2014 · 6.25 Impact Factor
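
"Items effectively held in memory" is commonly estimated in memory-load paradigms with Cowan's K, computed from hit and false-alarm rates at each set size. Whether this study used exactly this estimator is an assumption; the sketch below just shows the standard formula.

```python
def cowan_k(hit_rate, false_alarm_rate, set_size):
    """Cowan's K: estimated number of items held in memory,
    K = set_size * (hit_rate - false_alarm_rate)."""
    return set_size * (hit_rate - false_alarm_rate)

print(cowan_k(hit_rate=0.85, false_alarm_rate=0.15, set_size=4))  # -> 2.8
```
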
  • ABSTRACT: Strong links between music and motor functions suggest that music could be an interesting aid for motor learning. The present study is the first to test the potential of music to assist in learning sequences of gestures in normal and pathological aging. Participants with mild Alzheimer's disease (AD) and healthy older adults (controls) learned sequences of meaningless gestures accompanied by either music or a metronome. We also manipulated the learning procedure such that participants imitated the to-be-memorized gestures either in synchrony with the experimenter or after the experimenter during encoding. Results show different patterns of performance for the two groups. Overall, musical accompaniment had no impact on the controls' performance but improved that of AD participants. Conversely, synchronization of gestures during learning helped controls but seemed to interfere with retention in AD. We discuss the relevance of these findings for a better understanding of auditory-motor memory, and we propose recommendations to maximize the mnemonic effect of music in motor sequence learning for dementia care.
    Frontiers in Human Neuroscience 01/2014; 8:294. · 2.91 Impact Factor
  • Dawn L Merrett, Isabelle Peretz, Sarah J Wilson
    ABSTRACT: Singing has been used in language rehabilitation for decades, yet controversy remains over its effectiveness and mechanisms of action. Melodic Intonation Therapy (MIT) is the most well-known singing-based therapy; however, speculation surrounds when and how it might improve outcomes in aphasia and other language disorders. While positive treatment effects have been variously attributed to different MIT components, including melody, rhythm, hand-tapping, and the choral nature of the singing, there is uncertainty about the components that are truly necessary and beneficial. Moreover, the mechanisms by which the components operate are not well understood. Within the literature to date, proposed mechanisms can be broadly grouped into four categories: (1) neuroplastic reorganization of language function, (2) activation of the mirror neuron system and multimodal integration, (3) utilization of shared or specific features of music and language, and (4) motivation and mood. In this paper, we review available evidence for each mechanism and propose that these mechanisms are not mutually exclusive, but rather represent different levels of explanation, reflecting the neurobiological, cognitive, and emotional effects of MIT. Thus, instead of competing, each of these mechanisms may contribute to language rehabilitation, with a better understanding of their relative roles and interactions allowing the design of protocols that maximize the effectiveness of singing therapy for aphasia.
    Frontiers in Human Neuroscience 01/2014; 8:401. · 2.91 Impact Factor
  • ABSTRACT: Music is an integral part of the cultural heritage of all known human societies, with the capacity for music perception and production present in most people. Researchers generally agree that both genetic and environmental factors contribute to the broader realization of music ability, with the degree of music aptitude varying not only from individual to individual but also across various components of music ability within the same individual. While environmental factors influencing music development and expertise have been well investigated in the psychological and music literature, the interrogation of possible genetic influences has not progressed at the same rate. Recent advances in genetic research offer fertile ground for exploring the genetic basis of music ability. This paper begins with a brief overview of behavioral and molecular genetic approaches commonly used in human genetic analyses, and then critically reviews the key findings of genetic investigations of the components of music ability. Some promising and converging findings have emerged, with several loci on chromosome 4 implicated in singing and music perception, and certain loci on chromosome 8q implicated in absolute pitch and music perception. The gene AVPR1A on chromosome 12q has also been implicated in music perception, music memory, and music listening, whereas SLC6A4 on chromosome 17q has been associated with music memory and choir participation. Replication of these results in alternate populations and with larger samples is warranted to confirm the findings. Through increased research efforts, a clearer picture of the genetic mechanisms underpinning music ability will hopefully emerge.
    Frontiers in Psychology 01/2014; 5:658. · 2.80 Impact Factor
  • ABSTRACT: Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
    Cortex 01/2014; · 6.16 Impact Factor
  • Anna Zumbansen, Isabelle Peretz, Sylvie Hébert
    ABSTRACT: Melodic intonation therapy (MIT) is a structured protocol for language rehabilitation in people with Broca's aphasia. The main particularity of MIT is the use of intoned speech, a technique in which the clinician stylizes the prosody of short sentences using simple pitch and rhythm patterns. In the original MIT protocol, patients must repeat diverse sentences in order to adopt this way of speaking, with the goal of improving their natural, connected speech. MIT has long been regarded as a promising treatment, but its mechanisms are still debated. Recent work showed that rhythm plays a key role in variants of MIT, leading some to consider the use of pitch relatively unnecessary in MIT. Our study primarily aimed to assess the relative contributions of rhythm and pitch to MIT's generalization effect on non-trained stimuli and on connected speech. We compared a melodic therapy (with pitch and rhythm) to a rhythmic therapy (with rhythm only) and to a normally spoken therapy (without melodic elements). Three participants with chronic post-stroke Broca's aphasia underwent the treatments in one-hour sessions, 3 days per week for 6 weeks, in a cross-over design. The informativeness of connected speech, speech accuracy on trained and non-trained sentences, motor-speech agility, and mood were assessed before and after the treatments. The results show that all three treatments improved speech accuracy on trained sentences, but that the combination of rhythm and pitch elicited the strongest generalization effect, both to non-trained stimuli and to connected speech. No significant change was measured in motor-speech agility or mood with any treatment. The results emphasize the contribution of both rhythm and pitch to the efficacy of original MIT on connected speech, an outcome of primary clinical importance in aphasia therapy.
    Frontiers in Human Neuroscience 01/2014; 8:592. · 2.91 Impact Factor
  • Anna Zumbansen, Isabelle Peretz, Sylvie Hébert
    ABSTRACT: We present a critical review of the literature on melodic intonation therapy (MIT), one of the most formalized treatments used by speech-language therapists in Broca's aphasia. We suggest basic clarifications to enhance the scientific support of this promising treatment. First, therapeutic protocols using singing as a speech facilitation technique are not necessarily MIT. The goal of MIT is to restore propositional speech. The rationale is that patients can learn a new way to speak through singing by using language-capable regions of the right cerebral hemisphere. Eventually, patients are supposed to use this way of speaking permanently, but not to sing overtly. We argue that many treatment programs covered in systematic reviews of MIT's efficacy do not match MIT's therapeutic goal and rationale. Critically, we identified two main variations of MIT: the French thérapie mélodique et rythmée (TMR), which trains patients to use singing overtly as a facilitation technique in cases of speech struggle, and palliative versions of MIT that help patients with the most severe expressive deficits produce a limited set of useful, ready-made phrases. Second, we distinguish between the immediate effect of singing on speech production and the long-term effect of the entire program on language recovery. Many results in the MIT literature can be explained by this temporal perspective. Finally, we propose that MIT can be viewed as a treatment of apraxia of speech more than of aphasia. This issue should be explored in future experimental studies.
    Frontiers in Neurology 01/2014; 5:7.
  • ABSTRACT: Synchronizing movements with rhythmic inputs requires tight coupling of sensory and motor neural processes. Here, using a novel approach based on the recording of steady-state evoked potentials (SS-EPs), we examine how distant brain areas supporting these processes coordinate their dynamics. The electroencephalogram was recorded while subjects listened to a 2.4-Hz auditory beat and tapped their hand on every second beat. When subjects tapped to the beat, the EEG was characterized by a 2.4-Hz SS-EP compatible with beat-related entrainment and a 1.2-Hz SS-EP compatible with movement-related entrainment, based on the results of source analysis. Most importantly, when compared with passive listening to the beat, we found evidence suggesting an interaction between sensory- and motor-related activities when subjects tapped to the beat, in the form of (1) an additional SS-EP appearing at 3.6 Hz, compatible with a nonlinear product of sensorimotor integration; (2) phase coupling of beat- and movement-related activities; and (3) selective enhancement of beat-related activities over the hemisphere contralateral to the tapping, suggesting a top-down effect of movement-related activities on auditory beat processing. Taken together, our results are compatible with the view that rhythmic sensorimotor synchronization is supported by a dynamic coupling of sensory- and motor-related activities.
    Cerebral Cortex 10/2013 · 8.31 Impact Factor
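
SS-EPs are read out as narrow spectral peaks at the stimulation-related frequencies; the abstract's frequencies of interest are 1.2 Hz (movement), 2.4 Hz (beat), and the 3.6 Hz intermodulation product. A minimal FFT sketch follows, assuming a single-channel recording and omitting the neighbouring-bin noise subtraction that SS-EP studies typically add.

```python
import numpy as np

def ssep_amplitudes(eeg, fs, freqs=(1.2, 2.4, 3.6)):
    """Amplitude spectrum of one EEG channel, sampled at the movement-
    (1.2 Hz), beat- (2.4 Hz), and intermodulation- (3.6 Hz) frequencies.
    eeg: 1-D array of voltages; fs: sampling rate in Hz. Note that long
    epochs are needed to resolve peaks spaced 1.2 Hz apart."""
    amplitude = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freq_axis = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return {f: amplitude[np.argmin(np.abs(freq_axis - f))] for f in freqs}
```
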
  • ABSTRACT: We compared the electrophysiological correlates of maintaining non-musical tone sequences in auditory short-term memory (ASTM) to those of maintaining sequences of coloured disks in visual short-term memory (VSTM). The visual stimuli yielded a sustained posterior contralateral negativity (SPCN), suggesting that maintenance of sequences of coloured stimuli engaged structures similar to those involved in the maintenance of simultaneous visual displays. Maintenance of acoustic sequences, on the other hand, produced a sustained negativity at fronto-central sites, a component we name the sustained anterior negativity (SAN). The amplitude of the SAN increased with increasing load in ASTM and predicted individual differences in performance. There was no SAN in a control condition with the same auditory stimuli but no memory task, nor in the visual memory condition. These results suggest that the SAN indexes brain activity related to the maintenance of representations in ASTM that is distinct from the maintenance of representations in VSTM.
    Neuropsychologia 08/2013 · 3.48 Impact Factor
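
The load dependence of the SAN reduces to a mean-voltage measurement over the retention interval at fronto-central sites, repeated per memory load. The sketch below runs that logic end-to-end on synthetic data; the window boundaries and the size of the load effect are assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
times = np.linspace(-0.2, 2.0, 1101)          # s, relative to sequence offset
retention = (times >= 0.5) & (times <= 2.0)   # assumed retention window

def san_amplitude(erp):
    """Mean fronto-central voltage over the retention window; the SAN is a
    negativity, so more negative values indicate a larger SAN."""
    return erp[retention].mean()

# Synthetic ERPs whose retention-interval negativity grows with load.
erps = {load: -0.5 * load * retention + 0.1 * rng.standard_normal(times.size)
        for load in (1, 2, 3)}
print({load: round(san_amplitude(erp), 2) for load, erp in erps.items()})
```
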

Publication Stats

6k Citations
871.18 Total Impact Points

Institutions

  • 2007–2014
    • McGill University
      • Department of Music Research
      Montréal, Quebec, Canada
    • Montreal Heart Institute
      Montréal, Quebec, Canada
  • 1995–2014
    • Université du Québec à Montréal
      • Department of Psychology
      Montréal, Quebec, Canada
  • 1987–2014
    • Université de Montréal
      • Department of Psychology
      • International Laboratory for Brain, Music and Sound Research
      Montréal, Quebec, Canada
  • 2013
    • University of Melbourne
      • Melbourne School of Psychological Sciences
      Melbourne, Victoria, Australia
  • 2011–2013
    • Catholic University of Louvain
      • Institute of Neuroscience
      Walloon Region, Belgium
    • University of Lille Nord de France
      Lille, Nord-Pas-de-Calais, France
    • University of Franche-Comté
      Besançon, Franche-Comté, France
  • 2012
    • Northwestern University
      • Roxelyn and Richard Pepper Department of Communication Sciences and Disorders
      Evanston, IL, United States
    • Université de Montpellier 1
      Montpellier, Languedoc-Roussillon, France
  • 2011–2012
    • Lyon Neuroscience Research Center
      Lyon, Rhône-Alpes, France
  • 2010
    • Beijing Normal University
      Beijing, China
    • Université Victor Segalen Bordeaux 2
      Bordeaux, Aquitaine, France
    • Aristotle University of Thessaloniki
      • Department of Psychology
      Thessaloníki, Kentriki Makedonia, Greece
  • 2007–2010
    • University of Lyon
      Lyon, Rhône-Alpes, France
  • 1980–2010
    • Université Libre de Bruxelles
      • Research Centre of Cognition and Neurosciences (CRNC)
      Bruxelles, Brussels Capital Region, Belgium
  • 2009
    • Fonds de la Recherche Scientifique (FNRS)
      Bruxelles, Brussels Capital Region, Belgium
  • 2005–2009
    • Wyższa Szkoła Finansów i Zarządzania w Warszawie
      • Department of Cognitive Psychology
      Warszawa, Masovian Voivodeship, Poland
  • 2003–2009
    • University of Helsinki
      • Department of Psychology
      Helsinki, Province of Southern Finland, Finland
  • 2008
    • American University Washington D.C.
      Washington, D.C., United States
  • 2006
    • Keele University
      • School of Psychology
      Newcastle-under-Lyme, England, United Kingdom
    • Université Charles-de-Gaulle Lille 3
      Lille, Nord-Pas-de-Calais, France