Isabelle Peretz

McGill University, Montréal, Quebec, Canada


Publications (223) · 921.34 Total Impact

  •
    ABSTRACT: Advances in molecular technologies make it possible to pinpoint genomic factors associated with complex human traits. For cognition and behaviour, identification of underlying genes provides new entry points for deciphering the key neurobiological pathways. In the past decade, the search for genetic correlates of musicality has gained traction. Reports have documented familial clustering for different extremes of ability, including amusia and absolute pitch (AP), with twin studies demonstrating high heritability for some music-related skills, such as pitch perception. Certain chromosomal regions have been linked to AP and musical aptitude, while individual candidate genes have been investigated in relation to aptitude and creativity. Most recently, researchers in this field started performing genome-wide association scans. Thus far, studies have been hampered by relatively small sample sizes and limitations in defining components of musicality, including an emphasis on skills that can only be assessed in trained musicians. With opportunities to administer standardized aptitude tests online, systematic large-scale assessment of musical abilities is now feasible, an important step towards high-powered genome-wide screens. Here, we offer a synthesis of existing literatures and outline concrete suggestions for the development of comprehensive operational tools for the analysis of musical phenotypes. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
    Philosophical Transactions of The Royal Society B Biological Sciences 03/2015; 370(1664).
  • Source
    ABSTRACT: Musicality can be defined as a natural, spontaneously developing trait based on and constrained by biology and cognition. Music, by contrast, can be defined as a social and cultural construct based on that very musicality. One critical challenge is to delineate the constituent elements of musicality. What biological and cognitive mechanisms are essential for perceiving, appreciating and making music? Progress in understanding the evolution of music cognition depends upon adequate characterization of the constituent mechanisms of musicality and the extent to which they are present in non-human species. We argue for the importance of identifying these mechanisms and delineating their functions and developmental course, as well as suggesting effective means of studying them in human and non-human animals. It is virtually impossible to underpin the evolutionary role of musicality as a whole, but a multicomponent perspective on musicality that emphasizes its constituent capacities, development and neural cognitive specificity is an excellent starting point for a research programme aimed at illuminating the origins and evolution of musical behaviour as an autonomous trait. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
    Philosophical Transactions of The Royal Society B Biological Sciences 03/2015; 370(1664):1-8. · 6.31 Impact Factor
  •
    ABSTRACT: Neural overlap in processing music and speech, as measured by the co-activation of brain regions in neuroimaging studies, may suggest that parts of the neural circuitries established for language may have been recycled during evolution for musicality, or vice versa that musicality served as a springboard for language emergence. Such a perspective has important implications for several topics of general interest besides evolutionary origins. For instance, neural overlap is an important premise for the possibility of music training to influence language acquisition and literacy. However, neural overlap in processing music and speech does not entail sharing neural circuitries. Neural separability between music and speech may occur in overlapping brain regions. In this paper, we review the evidence and outline the issues faced in interpreting such neural data, and argue that converging evidence from several methodologies is needed before neural overlap is taken as evidence of sharing. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
    Philosophical Transactions of The Royal Society B Biological Sciences 03/2015; 370(1664). · 6.31 Impact Factor
  • Source
    ABSTRACT: Cochlear implant users show a profile of residual, yet poorly understood, musical abilities. An ability that has received little to no attention in this population is entrainment to a musical beat. We show for the first time that a heterogeneous group of cochlear implant users is able to find the beat and move their bodies in time to Latin Merengue music, especially when the music is presented in unpitched drum tones. These findings not only reveal a hidden capacity for feeling musical rhythm through the body in the deaf and hearing impaired population, but illuminate promising avenues for designing early childhood musical training that can engage implanted children in social musical activities with benefits potentially extending to non-musical domains.
    Hearing Research 01/2015.
  • Source
    Caroline Palmer, Pascale Lidji, Isabelle Peretz
    ABSTRACT: Tapping or clapping to an auditory beat, an easy task for most individuals, reveals precise temporal synchronization with auditory patterns such as music, even in the presence of temporal fluctuations. Most models of beat-tracking rely on the theoretical concept of pulse: a perceived regular beat generated by an internal oscillation that forms the foundation of entrainment abilities. Although tapping to the beat is a natural sensorimotor activity for most individuals, not everyone can track an auditory beat. Recently, the case of Mathieu was documented (Phillips-Silver et al. 2011 Neuropsychologia 49, 961-969. (doi:10.1016/j.neuropsychologia.2011.02.002)). Mathieu presented himself as having difficulty following a beat and exhibited synchronization failures. We examined beat-tracking in normal control participants, Mathieu, and a second beat-deaf individual, who tapped with an auditory metronome in which unpredictable perturbations were introduced to disrupt entrainment. Both beat-deaf cases exhibited failures in error correction in response to the perturbation task while exhibiting normal spontaneous motor tempi (in the absence of an auditory stimulus), supporting a deficit specific to perception-action coupling. A damped harmonic oscillator model was applied to the temporal adaptation responses; the model's parameters of relaxation time and endogenous frequency accounted for differences between the beat-deaf cases as well as the control group individuals.
    Philosophical Transactions of The Royal Society B Biological Sciences 12/2014; 369(1658). · 6.31 Impact Factor
  •
    ABSTRACT: Pitch plays a fundamental role in audition, from speech and music perception to auditory scene analysis. Congenital amusia is a neurogenetic disorder that appears to affect primarily pitch and melody perception. Pitch is normally conveyed by the spectro-temporal fine structure of low harmonics, but some pitch information is available in the temporal envelope produced by the interactions of higher harmonics. Using 10 amusic subjects and 10 matched controls, we tested the hypothesis that amusics suffer exclusively from impaired processing of spectro-temporal fine structure. We also tested whether the inability of amusics to process acoustic temporal fine structure extends beyond pitch by measuring sensitivity to interaural time differences, which also rely on temporal fine structure. Further tests were carried out on basic intensity and spectral resolution. As expected, pitch perception based on spectro-temporal fine structure was impaired in amusics; however, no significant deficits were observed in amusics' ability to perceive the pitch conveyed via temporal-envelope cues. Sensitivity to interaural time differences was also not significantly different between the amusic and control groups, ruling out deficits in the peripheral coding of temporal fine structure. Finally, no significant differences in intensity or spectral resolution were found between the amusic and control groups. The results demonstrate a pitch-specific deficit in fine spectro-temporal information processing in amusia that seems unrelated to temporal or spectral coding in the auditory periphery. These results are consistent with the view that there are distinct mechanisms dedicated to processing resolved and unresolved harmonics in the general population, the former being altered in congenital amusia while the latter is spared.
    Neuropsychologia 11/2014; · 3.45 Impact Factor
  •
    ABSTRACT: Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, in part because of more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.
    Journal of Cognitive Neuroscience 11/2014; · 4.69 Impact Factor
  • Source
    ABSTRACT: Music and speech are two of the most relevant and common sounds in the human environment. Perceiving and processing these two complex acoustical signals rely on a hierarchical functional network distributed throughout several brain regions within and beyond the auditory cortices. Given their similarities, the neural bases for processing these two complex sounds overlap to a certain degree, but particular brain regions may show selectivity for one or the other acoustic category, which we aimed to identify. We examined 53 subjects (28 of them professional musicians) by functional magnetic resonance imaging (fMRI), using a paradigm designed to identify regions showing increased activity in response to different types of musical stimuli, compared to different types of complex sounds, such as speech and non-linguistic vocalizations. We found a region in the anterior portion of the superior temporal gyrus (planum polare) that showed preferential activity in response to musical stimuli and was present in all our subjects, regardless of musical training, and invariant across different musical instruments (violin, piano or synthetic piano). Our data show that this cortical region is preferentially involved in processing musical, as compared to other complex sounds, suggesting a functional role as a second-order relay, possibly integrating acoustic characteristics intrinsic to music (e.g., melody extraction). Moreover, we assessed whether musical experience modulates the response of cortical regions involved in music processing and found evidence of functional differences between musicians and non-musicians during music listening. In particular, bilateral activation of the planum polare was more prevalent, but not exclusive, in musicians than non-musicians, and activation of the right posterior portion of the superior temporal gyrus (planum temporale) differed between groups. 
    Our results provide evidence of functional specialization for music processing in specific regions of the auditory cortex and show domain-specific functional differences possibly correlated with musicianship.
    Cortex 10/2014; · 6.04 Impact Factor
  • Anna Zumbansen, Isabelle Peretz, Sylvie Hébert
    ABSTRACT: Melodic intonation therapy (MIT) is a structured protocol for language rehabilitation in people with Broca's aphasia. The main distinctive feature of MIT is the use of intoned speech, a technique in which the clinician stylizes the prosody of short sentences using simple pitch and rhythm patterns. In the original MIT protocol, patients must repeat diverse sentences in order to adopt this way of speaking, with the goal of improving their natural, connected speech. MIT has long been regarded as a promising treatment but its mechanisms are still debated. Recent work showed that rhythm plays a key role in variations of MIT, leading to consider the use of pitch as relatively unnecessary in MIT. Our study primarily aimed to assess the relative contribution of rhythm and pitch in MIT's generalization effect to non-trained stimuli and to connected speech. We compared a melodic therapy (with pitch and rhythm) to a rhythmic therapy (with rhythm only) and to a normally spoken therapy (without melodic elements). Three participants with chronic post-stroke Broca's aphasia underwent the treatments in hourly sessions, 3 days per week for 6 weeks, in a cross-over design. The informativeness of connected speech, speech accuracy of trained and non-trained sentences, motor-speech agility, and mood were assessed before and after the treatments. The results show that the three treatments improved speech accuracy in trained sentences, but that the combination of rhythm and pitch elicited the strongest generalization effect both to non-trained stimuli and connected speech. No significant change was measured in motor-speech agility or mood measures with either treatment. The results emphasize the beneficial effect of both rhythm and pitch in the efficacy of original MIT on connected speech, an outcome of primary clinical importance in aphasia therapy.
    Frontiers in Human Neuroscience 08/2014; 8:592. · 2.90 Impact Factor
  • Source
    ABSTRACT: The inability to vocally match a pitch can be caused by poor pitch perception or by poor vocal-motor control. Although previous studies have tried to examine the relationship between pitch perception and vocal production, they have failed to control for the timbre of the target to be matched. In the present study, we compared pitch-matching accuracy with an unfamiliar instrument (the slider) and with the voice; the slider was designed to play back recordings of the participant's own voice. We also measured pitch accuracy in singing a familiar melody ("Happy Birthday") to assess the relationship between single-pitch-matching tasks and melodic singing. Our results showed that participants (all nonmusicians) were significantly better at matching recordings of their own voices with the slider than with their voice, indicating that vocal-motor control is an important limiting factor on singing ability. We also found significant correlations between the ability to sing a melody in tune and vocal pitch matching, but not pitch matching on the slider. Better melodic singers also tended to have higher quality voices (as measured by acoustic variables). These results provide important evidence about the role of vocal-motor control in poor singing ability and demonstrate that single-pitch-matching tasks can be useful in measuring general singing abilities.
    Attention Perception & Psychophysics 07/2014; · 2.15 Impact Factor
  • Source
    ABSTRACT: Music is an integral part of the cultural heritage of all known human societies, with the capacity for music perception and production present in most people. Researchers generally agree that both genetic and environmental factors contribute to the broader realization of music ability, with the degree of music aptitude varying, not only from individual to individual, but across various components of music ability within the same individual. While environmental factors influencing music development and expertise have been well investigated in the psychological and music literature, the interrogation of possible genetic influences has not progressed at the same rate. Recent advances in genetic research offer fertile ground for exploring the genetic basis of music ability. This paper begins with a brief overview of behavioral and molecular genetic approaches commonly used in human genetic analyses, and then critically reviews the key findings of genetic investigations of the components of music ability. Some promising and converging findings have emerged, with several loci on chromosome 4 implicated in singing and music perception, and certain loci on chromosome 8q implicated in absolute pitch and music perception. The gene AVPR1A on chromosome 12q has also been implicated in music perception, music memory, and music listening, whereas SLC6A4 on chromosome 17q has been associated with music memory and choir participation. Replication of these results in alternate populations and with larger samples is warranted to confirm the findings. Through increased research efforts, a clearer picture of the genetic mechanisms underpinning music ability will hopefully emerge.
    Frontiers in Psychology 06/2014; 5:658. · 2.80 Impact Factor
  • Dawn L Merrett, Isabelle Peretz, Sarah J Wilson
    ABSTRACT: Singing has been used in language rehabilitation for decades, yet controversy remains over its effectiveness and mechanisms of action. Melodic Intonation Therapy (MIT) is the most well-known singing-based therapy; however, speculation surrounds when and how it might improve outcomes in aphasia and other language disorders. While positive treatment effects have been variously attributed to different MIT components, including melody, rhythm, hand-tapping, and the choral nature of the singing, there is uncertainty about the components that are truly necessary and beneficial. Moreover, the mechanisms by which the components operate are not well understood. Within the literature to date, proposed mechanisms can be broadly grouped into four categories: (1) neuroplastic reorganization of language function, (2) activation of the mirror neuron system and multimodal integration, (3) utilization of shared or specific features of music and language, and (4) motivation and mood. In this paper, we review available evidence for each mechanism and propose that these mechanisms are not mutually exclusive, but rather represent different levels of explanation, reflecting the neurobiological, cognitive, and emotional effects of MIT. Thus, instead of competing, each of these mechanisms may contribute to language rehabilitation, with a better understanding of their relative roles and interactions allowing the design of protocols that maximize the effectiveness of singing therapy for aphasia.
    Frontiers in Human Neuroscience 06/2014; 8:401. · 2.90 Impact Factor
  •
    ABSTRACT: Previous studies have suggested that presenting to-be-memorised lyrics in a singing mode, instead of a speaking mode, may facilitate learning and retention in normal adults. In this study, seven healthy older adults and eight participants with mild Alzheimer's disease (AD) learned and memorised lyrics that were either sung or spoken. We measured the percentage of words recalled from these lyrics immediately and after 10 minutes. Moreover, in AD participants, we tested the effect of successive learning episodes for one spoken and one sung excerpt, as well as long-term retention after a four-week delay. Singing did not influence immediate recall of the lyrics but increased delayed recall for both groups. In AD, learning slopes for sung and spoken lyrics did not show a significant difference across successive learning episodes. However, sung lyrics showed a slight advantage over spoken ones after a four-week delay. These results suggest that singing may increase the load of initial learning but improve long-term retention of newly acquired verbal information. We further propose some recommendations on how to maximise these effects and make them relevant for therapeutic applications.
    Neuropsychological Rehabilitation 06/2014; · 2.07 Impact Factor
  •
    ABSTRACT: Strong links between music and motor functions suggest that music could represent an interesting aid for motor learning. The present study is the first to test the potential of music to assist in the learning of sequences of gestures in normal and pathological aging. Participants with mild Alzheimer's disease (AD) and healthy older adults (controls) learned sequences of meaningless gestures that were accompanied by either music or a metronome. We also manipulated the learning procedure such that participants had to imitate the to-be-memorized gestures in synchrony with the experimenter or after the experimenter during encoding. Results show different patterns of performance for the two groups. Overall, musical accompaniment had no impact on the controls' performance but improved that of AD participants. Conversely, synchronization of gestures during learning helped controls but seemed to interfere with retention in AD. We discuss these findings regarding their relevance for a better understanding of auditory-motor memory, and we propose recommendations to maximize the mnemonic effect of music for motor sequence learning for dementia care.
    Frontiers in Human Neuroscience 05/2014; 8:294. · 2.90 Impact Factor
  • Source
    ABSTRACT: Intrinsic emotional expressions such as those communicated by faces and vocalizations have been shown to engage specific brain regions, such as the amygdala. Although music constitutes another powerful means to express emotions, the neural substrates involved in its processing remain poorly understood. In particular, it is unknown whether brain regions typically associated with processing "biologically-relevant" emotional expressions are also recruited by emotional music. To address this question, we conducted an event-related fMRI study in 47 healthy volunteers in which we directly compared responses to basic emotions (fear, sadness and happiness, as well as neutral) expressed through faces, nonlinguistic vocalizations and short, novel musical excerpts. Our results confirmed the importance of fear in emotional communication, as revealed by significant BOLD signal increases in a cluster within the posterior amygdala and anterior hippocampus, as well as in the posterior insula, across all three domains. Moreover, subject-specific amygdala responses to fearful music and vocalizations were correlated, consistent with the proposal that the brain circuitry involved in the processing of musical emotions might be shared with the one that has evolved for vocalizations. Overall, our results show that processing of fear expressed through music engages some of the same brain areas known to be crucial for detecting and evaluating threat-related information.
    Social Cognitive and Affective Neuroscience 05/2014; · 5.04 Impact Factor

Publication Stats

7k Citations
921.34 Total Impact Points

Institutions

  • 2007–2014
    • McGill University
      • Department of Music Research
      Montréal, Quebec, Canada
    • Montreal Heart Institute
      • Research Centre
      Montréal, Quebec, Canada
  • 1995–2014
    • Université du Québec à Montréal
      • Department of Psychology
      Montréal, Quebec, Canada
  • 1987–2014
    • Université de Montréal
      • Department of Psychology
      • International Laboratory for Brain, Music and Sound Research
      Montréal, Quebec, Canada
  • 2011–2012
    • Catholic University of Louvain
      • Institute of Neuroscience
      Louvain-la-Neuve, Wallonia, Belgium
    • French Institute of Health and Medical Research
      Paris, Île-de-France, France
    • University of Franche-Comté
      Besançon, Franche-Comté, France
  • 2010
    • Université Victor Segalen Bordeaux 2
      Bordeaux, Aquitaine, France
    • University of Lyon
      Lyon, Rhône-Alpes, France
    • Beijing Normal University
      • State Key Laboratory of Cognitive Neuroscience and Learning
      Beijing, China
    • Aristotle University of Thessaloniki
      • Department of Psychology
      Thessaloniki, Central Macedonia, Greece
  • 2005–2009
    • Wyższa Szkoła Finansów i Zarządzania w Warszawie
      • Department of Cognitive Psychology
      Warsaw, Masovian Voivodeship, Poland
  • 2003–2009
    • University of Helsinki
      • Cognitive Brain Research Unit
      Helsinki, Uusimaa, Finland
  • 2008
    • American University Washington D.C.
      Washington, D.C., United States
  • 2006
    • Keele University
      • School of Psychology
      Newcastle-under-Lyme, England, United Kingdom
    • Université Charles-de-Gaulle Lille 3
      Lille, Nord-Pas-de-Calais, France
  • 2004
    • Newcastle University
      • Institute of Neuroscience
      Newcastle upon Tyne, England, United Kingdom
  • 1980–1987
    • Université Libre de Bruxelles
      • Research Centre of Cognition and Neurosciences (CRNC)
      Brussels, Brussels-Capital Region, Belgium