Article · Literature Review

A domain-general perspective on the role of the basal ganglia in language and music: Benefits of music therapy for the treatment of aphasia

... The literature shows a strong correlation between language deficits and subcortical lesions, especially in the basal ganglia, which are involved in rhythm processing, temporal prediction, and motor programming and execution [19]. For this reason, in this study we described the effects of an individualized music therapy treatment in a patient with total suppression of language and absence of non-verbal communication after a cerebrovascular event. ...
Article
Full-text available
Patients affected by global aphasia are no longer able to understand, produce, name objects, write, or read. Global aphasia occurs as a result of functional damage of ischemic or hemorrhagic origin affecting the entire perisylvian region and frontal operculum. Rehabilitation training aims to promote early intervention in the acute phase. We described the case of a 57-year-old female patient with left intraparenchymal fronto-temporo-parietal cerebral hemorrhage and right hemiplegia. After admission to a clinical rehabilitation center, the patient was unable to follow simple commands and presented severe impairment of auditory and written comprehension. Speech was characterized by stereotyped emission of monosyllabic sounds, and constructive praxis abilities were compromised. Rehabilitation included a program of Neurologic Music Therapy (NMT), specifically Symbolic Communication Training Through Music (SYCOM) and Musical Speech Stimulation (MUSTIM). The effect of the rehabilitative treatment was measured by the patient's improved cognitive and language performance from T0 to T1. Music rehabilitative interventions and continuous speech therapy improved visual attention and communicative intentionality. To confirm the effectiveness of the data presented, further studies with larger samples would be necessary to assess the real role of music therapy in post-stroke global aphasia.
... We selected subcortical structures reported in the literature (bilateral amygdala and caudate in the basal ganglia, bilateral hippocampus) because of their important function in semantic memory (Duff et al., 2019; Klooster et al., 2020) and language processing (Shaw et al., 2016; Shi and Zhang, 2020). These 6 ROIs were also defined as 5-mm spheres around the center coordinates based on the AAL atlas (Tzourio-Mazoyer et al., 2002). ...
Article
Full-text available
Poststroke aphasia is one of the most dramatic functional deficits that results from direct damage of focal brain regions and dysfunction of large-scale brain networks. The reconstruction of language function depends on hierarchical whole-brain dynamic reorganization. However, investigations into the longitudinal neural changes of large-scale brain networks in poststroke aphasia remain scarce. Here we characterize large-scale brain dynamics in left-frontal-stroke aphasia through energy landscape analysis. Using fMRI during an auditory comprehension task, we find that aphasia patients suffer serious perturbation of whole-brain dynamics in the acute and subacute stages after stroke, during which brain activity was restricted to two major activity patterns. With spontaneous recovery, brain flexibility improved in the chronic stage. Critically, we demonstrated that the abnormal neural dynamics are correlated with aberrant brain network coordination. Taken together, the energy landscape analysis showed that acute poststroke aphasia has constrained, low-dimensional brain dynamics, which are replaced by less constrained, high-dimensional dynamics in chronic aphasia. Our study provides a new perspective for a deeper understanding of the pathological mechanisms of poststroke aphasia.
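Energy landscape analyses of this kind typically fit a pairwise maximum-entropy (Ising-type) model to binarized regional activity and then search for local minima of the energy function, which serve as the attractor states of the dynamics. The sketch below is not the authors' pipeline; it is a minimal illustration assuming the model parameters `h` (regional biases) and `J` (pairwise couplings) have already been fitted:

```python
import numpy as np
from itertools import product

def pattern_energy(sigma, h, J):
    """Energy of a binary (+/-1) activity pattern under a pairwise
    maximum-entropy model: E = -h.sigma - 0.5 * sigma.J.sigma."""
    sigma = np.asarray(sigma, dtype=float)
    return -h @ sigma - 0.5 * sigma @ J @ sigma

def local_minima(h, J, n):
    """Enumerate all 2**n binary patterns and return those whose energy
    is lower than that of every single-region-flip neighbour."""
    minima = []
    for bits in product([-1.0, 1.0], repeat=n):
        s = np.array(bits)
        e = pattern_energy(s, h, J)
        is_min = True
        for i in range(n):
            t = s.copy()
            t[i] = -t[i]          # flip one region's state
            if pattern_energy(t, h, J) < e:
                is_min = False
                break
        if is_min:
            minima.append((tuple(int(x) for x in s), e))
    return minima
```

The number of such minima and the sizes of their basins are what distinguish a "constrained, low-dimensional" landscape (few deep minima) from a more flexible one.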
... With the empowerment of artificial intelligence, music therapy technology has seen innovative development across the whole "diagnosis, treatment and evaluation" process (Ramirez et al., 2018). Although the effect of traditional music therapy has been generally recognized and accepted, its existing technical means still have some defects, such as inaccurate targeting (e.g., lack of pathological pertinence, ignoring individual differences), time-consuming and labor-intensive procedures (e.g., high labor cost, limited sites), low professionalism (e.g., unsystematic efficacy evaluation indicators), and privacy disclosure (Shi and Zhang, 2020). The rapid development of natural language processing, machine vision, speech recognition and other fields provides ideas for whole-process innovation in the "diagnosis, treatment and evaluation" of music therapy (Qu and Xiong, 2012). ...
Article
Full-text available
Music can express people’s thoughts and emotions. Music therapy stimulates and hypnotizes the human brain through various forms of musical activity, such as listening, singing, playing and rhythm. With the empowerment of artificial intelligence, music therapy technology has seen innovative development across the whole “diagnosis, treatment and evaluation” process. It is necessary to exploit the advantages of artificial intelligence technology to innovate music therapy methods, ensure the accuracy of treatment schemes, and provide more paths for the development of the medical field. This paper proposes a long short-term memory (LSTM)-based generation and classification algorithm for multi-voice music data. A Multi-Voice Music Generation system called MVMG based on the algorithm is developed. MVMG contains two main steps. First, the music data are modeled as MIDI and text sequence data using an autoencoder model, including music feature extraction and music clip representation. Then an LSTM-based music generation and classification model is developed for generating and analyzing music in specific treatment scenarios. MVMG is evaluated on datasets collected by us: single-melody MIDI files and a Chinese classical music dataset. The experiments show that the highest accuracy of the autoencoder-based feature extractor reaches 95.3%, and the average F1-score of the LSTM is 95.68%, much higher than that of the DNN-based classification model.
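The classification step described above can be pictured with a minimal LSTM forward pass. The code below is an illustrative sketch, not the MVMG implementation: it assumes feature vectors have already been extracted (e.g., by the autoencoder), uses untrained random weights, and omits the training loop entirely:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Minimal LSTM forward pass over a sequence of feature vectors,
    followed by a softmax readout of the final hidden state."""
    def __init__(self, n_in, n_hidden, n_classes):
        scale = 0.1
        # One stacked weight matrix for the input, forget, cell, output gates.
        self.W = rng.normal(0.0, scale, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.Wy = rng.normal(0.0, scale, (n_classes, n_hidden))
        self.n_hidden = n_hidden

    def forward(self, xs):
        H = self.n_hidden
        h = np.zeros(H)   # hidden state
        c = np.zeros(H)   # cell state
        for x in xs:
            z = self.W @ np.concatenate([x, h]) + self.b
            i, f, g, o = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
        logits = self.Wy @ h
        p = np.exp(logits - logits.max())   # numerically stable softmax
        return p / p.sum()
```

In practice the weights would be learned by backpropagation through time on labeled music clips; the block only demonstrates the gating arithmetic and the class-probability readout.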
... Whereas music education studies have mainly focused on teaching methodologies (e.g., Pozo et al., 2022), or perceptions about learning (López-Íñiguez and Pozo, 2014; García-Gil et al., 2021), social and cultural research has provided an extension of music considerations by exploring the benefits of music therapy (Gómez-Romero et al., 2017; Shi and Zhang, 2020), analyzing different cross-cultural approximations to music (Cross, 2001), and supporting the role of music as a social activity in musicians (Volpe et al., 2016) and across the general population (D'Ausilio et al., 2015). ...
... Therefore, due to the limitations of fNIRS technology, functional connectivity can only be detected in the cerebral cortex. Functional connectivity information for deep brain regions, such as the basal ganglia involved in auditory processing [54], is not available. Additionally, the obtained fNIRS signals are confounded by extracranial signals. ...
Article
Full-text available
Humans have the ability to appreciate and create music. However, why and how humans have this distinctive ability to perceive music remains unclear. Additionally, the investigation of innate perceiving skills in humans is complicated by the fact that we have been actively and passively exposed to auditory stimuli, or have systematically learnt music, after birth. Therefore, to explore innate musical perceiving ability, infants with preterm birth may be the most suitable population. In this study, auditory brain networks were explored using dynamic functional connectivity-based reliable component analysis (RCA) in preterm infants during music listening. Brain activation was captured by portable functional near-infrared spectroscopy (fNIRS) to simulate a natural environment for the preterm infants. The components with the maximum inter-subject correlation were extracted. The generated spatial filters identified the shared spatial structural features of functional brain connectivity across subjects while listening to the common music, exhibiting functional synchronization between the right temporal region and the frontal and motor cortices, and between the bilateral temporal regions. This specific pattern supports functions involving music comprehension, emotion generation, language processing, memory, and sensation. The fluctuation of the extracted components and the phase variation demonstrate the interactions between the extracted brain networks in encoding musical information. These results are critically important for our understanding of the underlying mechanisms of innate perceiving skills at early ages in humans during naturalistic music listening.
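The "maximum inter-subject correlation" criterion above can be made concrete with a simpler, related quantity: leave-one-out inter-subject correlation (ISC). The sketch below is an illustration under that simplification, not the RCA pipeline used in the study (RCA finds spatial filters via a generalized eigendecomposition rather than correlating raw channels):

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation: for each subject, the
    Pearson correlation between that subject's time course and the mean
    time course of all remaining subjects.

    data: array of shape (n_subjects, n_timepoints)."""
    data = np.asarray(data, dtype=float)
    n = data.shape[0]
    iscs = []
    for i in range(n):
        others = np.delete(data, i, axis=0).mean(axis=0)
        iscs.append(np.corrcoef(data[i], others)[0, 1])
    return np.array(iscs)
```

RCA would then search for linear combinations of recording channels that maximize exactly this kind of across-subject agreement during shared stimulation.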
... The basal ganglia, Nadeau continues, do not have strict language functionalities, with the possible exception of representing movement verbs. Further, a fascinating review by Shi & Zhang (2020) on music therapy as treatment for aphasia suggests the basal ganglia facilitate language only insofar as they handle rhythm and beat processing, temporal prediction, and the execution of motor programs. As with the basal ganglia, Nadeau (2021) emphasized the purely computational nature of thalamic circuits involving language centers. ...
Preprint
Full-text available
Aphasia, the loss of language ability following damage to the brain, is among the most disabling and common consequences of stroke. Subcortical stroke, occurring in the basal ganglia, thalamus, and/or deep white matter can result in aphasia, often characterized by word fluency, motor speech output, or sentence generation impairments. The link between greater lesion volume and acute aphasia is well documented, but the independent contributions of lesion location, cortical hypoperfusion, prior stroke, and white matter degeneration (leukoaraiosis) remain unclear, particularly in subcortical aphasia. Thus, we aimed to disentangle the contributions of each factor on language impairments in left hemisphere acute subcortical stroke survivors. Eighty patients with acute left hemisphere subcortical stroke (less than 10 days post-onset) participated. We manually traced acute lesions on diffusion-weighted scans and prior lesions on T2-weighted scans. Leukoaraiosis was rated on T2-weighted scans using the Fazekas et al. (1987) scale. Fluid-attenuated inversion recovery (FLAIR) scans were evaluated for hyperintense vessels in each vascular territory, providing an indirect measure of hypoperfusion in lieu of perfusion-weighted imaging. Compared to subcortical stroke patients without aphasia, patients with aphasia had greater acute and total lesion volume, were older, and had significantly greater damage to the internal capsule (which did not survive controlling for total lesion volume). Patients with aphasia did not differ from non-aphasic patients by other demographic or stroke variables. Age was the only significant predictor of aphasia status in a logistic regression model. Further examination of three participants with severe language impairments suggests that their deficits result from impairment in domain-general, rather than linguistic, processes. 
Given the variability in language deficits and imaging markers associated with such deficits, it seems likely that subcortical aphasia is a heterogeneous clinical syndrome with distinct causes across individuals.
... The basal ganglia are considered to be involved, among other things, in the integration of auditory input into speech motor movements [94]. In particular, the involvement of the basal ganglia in rhythm processing and temporal prediction is widely discussed [95][96][97]. For patients with PD, it was shown that they might benefit from exposure to a temporally predictable, regular auditory cue [98]. ...
Article
Full-text available
Aichert, I.; Lehner, K.; Falk, S.; Späth, M.; Franke, M.; Ziegler, W.
In the present study, we investigated whether individuals with neurogenic speech sound impairments of three types (Parkinson's dysarthria, apraxia of speech, and aphasic phonological impairment) accommodate their speech to the natural speech rhythm of an auditory model, and if so, whether the effect is more significant after hearing metrically regular sentences as compared to those with an irregular pattern. This question builds on theories of rhythmic entrainment, which assume that sensorimotor predictions of upcoming events allow humans to synchronize their actions with an external rhythm. To investigate entrainment effects, we conducted a sentence completion task relating participants' response latencies to the spoken rhythm of the prime heard immediately before. A further research question was whether the perceived rhythm interacts with the rhythm of the participants' own productions, i.e., the trochaic or iambic stress pattern of disyllabic target words. For a control group of healthy speakers, our study revealed evidence of entrainment when trochaic target words were preceded by regularly stressed prime sentences. Persons with Parkinson's dysarthria showed a pattern similar to that of the healthy individuals. For the patient groups with apraxia of speech and with phonological impairment, considerably longer response latencies with differing patterns were observed. Trochaic target words were initiated with significantly shorter latencies, whereas the metrical regularity of prime sentences had no consistent impact on response latencies and did not interact with the stress pattern of the target words to be produced. The absence of entrainment in these patients may be explained by their more severe difficulties in initiating speech at all. We discuss the results in terms of clinical implications for diagnostics and therapy in neurogenic speech disorders.
... First, it has become clear that some aspects of music and language share a common neural basis. From the clinical perspective, Shi and Zhang (2020) highlight the function of rhythm processing in the cortical-basal ganglia loop for both cognitive domains. More specifically, we propose that the basal ganglia loop is responsible for transferring hierarchy to linearization in music and language, supported by the mechanisms of temporal prediction, motor programming, and execution. ...
Article
Savage et al. argue for musicality as having evolved for the overarching purpose of social bonding. By way of contrast, we highlight contemporary predictive processing models of human cognitive functioning in which the production and enjoyment of music follows directly from the principle of prediction error minimization.
... First, it has become clear that some aspects of music and language share a common neural basis. From the clinical perspective, Shi and Zhang (2020) highlight the function of rhythm processing in the cortical-basal ganglia loop for both cognitive domains. More specifically, we propose that the basal ganglia loop is responsible for transferring hierarchy to linearization in music and language, supported by the mechanisms of temporal prediction, motor programming, and execution. ...
Article
We propose that not social bonding, but rather a different mechanism underlies the development of musicality: being unable to survive alone. The evolutionary constraint of being dependent on other humans for survival provides the ultimate driving force for acquiring human faculties such as sociality and musicality, through mechanisms of learning and neural plasticity. This evolutionary mechanism maximizes adaptation to a dynamic environment.
... In an fMRI study, the authors found increased activation in both the right and left basal ganglia when participants processed sentences with complicated syntactic structures (Progovac et al., 2018). From a clinical perspective, Shi & Zhang (2020) propose that the basal ganglia are involved in the process by which hierarchical syntactic structures are transferred into linearized structures. ...
Conference Paper
Full-text available
By re-evaluating Crow's (2000) claim that "Schizophrenia [is] the price that Homo sapiens pays for language", we suggest that displacement, the ability to refer to things and situations outside of the here and now, partly realized through syntactic operations, could be related to the symptoms of schizophrenia. Mainly supported by episodic memory, displacement has been found in nonhuman animals, though in a more limited form than in humans. As a conserved subcortical region, the hippocampus plays a key role in episodic memory across species. Evidence in humans suggests that the parietal lobe and basal ganglia are also involved in episodic memory. We propose that what makes human displacement more developed could rely on better coordination between the hippocampus and the parietal lobe and basal ganglia. Given that all these areas take part in language processing, displacement could have served as an interface between episodic memory and language.
... Numerous anecdotal reports from caregivers and professionals describe people with AD who can hardly speak but are able to sing. Research in the field of music cognition has attempted to explain the complexity of this neurologic phenomenon by suggesting a common neural basis for language and music [11]. Music and speech processing share a large number of properties; therefore, several areas of the brain are expected to overlap in the processing of music and speech, enabling the use of music as stimulation for other functions [12,13]. ...
Article
While singing in music therapy with people with Alzheimer's disease (AD) is vastly documented, scarce research deals with the impact of singing on their language abilities. This study addressed the issue of language decline in AD and explored the impact of group singing on the language abilities of people with moderate to severe-stage AD. Participants were randomized to experimental (n=16) or wait-list control (n=14) groups. The experimental group received eight music therapy group sessions, which focused on singing, while both groups received the standard treatment. The data analysis included pre-post picture description tests and examination of speech parameters throughout the group sessions. A significant difference was demonstrated between the groups in the proportion of non-coherent speech in relation to total speech used by participants. The experimental group did not exhibit a deterioration in coherent speech, while the control group exhibited an increase in non-coherent speech in proportion to the total speech used by participants. The findings also indicated that participants in the experimental group showed an improvement in speech parameters as well as in their ability to sing. Singing in music therapy with people with AD can play a significant role in preserving speech and encouraging conversation abilities. Keywords: Alzheimer's disease, music therapy, group singing, language abilities, speech
... For example, Kotz and colleagues (2018) emphasized the tight relationship between rhythm in speech and music. One recent review also suggests that temporal structure (rhythm in particular) processing influences speech processing in non-fluent aphasia, and proposes that rhythm processing, as the function of temporal prediction, is a shared component underlying music and speech (Shi & Zhang, 2020). Moreover, processing of linguistic meter and syntax was suggested to interact in the P600 (Schmidt-Kassow & ...). ...
Article
Full-text available
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferably engages the abstract rule-based control circuit, musical syntax rather employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Article
Full-text available
(1) Introduction: Neurologic music therapy (NMT) is a non-pharmacological approach of interaction through the therapeutic use of music in motor, sensory and cognitive dysfunctions caused by damage or diseases of the nervous system. (2) Objective: This study aimed to critically appraise the available literature on the application of particular NMT techniques in the rehabilitation of geriatric disorders. (3) Methods: PubMed, ScienceDirect and EBSCOhost databases were searched. We considered randomized controlled trials (RCTs) from the last 12 years using at least one of the NMT techniques from the sensorimotor, speech/language and cognitive domains in the therapy of patients over 60 years old and with psychogeriatric disorders. (4) Results: Of the 255 articles, 8 met the inclusion criteria. All papers in the final phase concerned the use of rhythmic auditory stimulation (RAS) (sensorimotor technique) in the rehabilitation of both Parkinson's disease (PD) patients (six studies) and stroke patients (SPs) (two studies). (5) Conclusion: All reports suggest that the RAS technique has a significant effect on the improvement of gait parameters and the balance of PD patients and SPs, as well as the risk of falls in PD patients.
Article
Full-text available
Aphasia, the loss of language ability following damage to the brain, is among the most disabling and common consequences of stroke. Subcortical stroke, occurring in the basal ganglia, thalamus, and/or deep white matter can result in aphasia, often characterized by word fluency, motor speech output, or sentence generation impairments. The link between greater lesion volume and acute aphasia is well documented, but the independent contributions of lesion location, cortical hypoperfusion, prior stroke, and white matter degeneration (leukoaraiosis) remain unclear, particularly in subcortical aphasia. Thus, we aimed to disentangle the contributions of each factor on language impairments in left hemisphere acute subcortical stroke survivors. Eighty patients with acute ischemic left hemisphere subcortical stroke (less than 10 days post-onset) participated. We manually traced acute lesions on diffusion-weighted scans and prior lesions on T2-weighted scans. Leukoaraiosis was rated on T2-weighted scans using the Fazekas et al. (1987) scale. Fluid-attenuated inversion recovery (FLAIR) scans were evaluated for hyperintense vessels in each vascular territory, providing an indirect measure of hypoperfusion in lieu of perfusion-weighted imaging. We found that language performance was negatively correlated with acute/total lesion volumes and greater damage to substructures of the deep white matter and basal ganglia. We conducted a LASSO regression that included all variables for which we found significant univariate relationships to language performance, plus nuisance regressors. Only total lesion volume was a significant predictor of global language impairment severity. Further examination of three participants with severe language impairments suggests that their deficits result from impairment in domain-general, rather than linguistic, processes. 
Given the variability in language deficits and imaging markers associated with such deficits, it seems likely that subcortical aphasia is a heterogeneous clinical syndrome with distinct causes across individuals.
Article
Objective: The aim of this meta-analysis was to evaluate the evidence on the effectiveness of music therapy in the recovery of language function in post-stroke aphasia, compared with conventional therapy or no therapy. Methods: We searched for studies that explored the effect of music therapy on language function in post-stroke aphasia, published in PubMed, Embase, the Cochrane Library, Web of Science, CINAHL, ProQuest Digital Dissertations, and ClinicalTrials.gov from inception to March 2021. Six reviewers independently screened studies for eligibility, extracted data, and evaluated methodological quality. Results were pooled using the mean difference (MD) with 95% confidence interval (CI). Heterogeneity was assessed by the chi-square test and I² statistic. Results: Six studies involving 115 patients were included in this meta-analysis. The methodological quality of these studies ranged from poor to excellent. There was a significant mean difference in functional communication for post-stroke aphasia of 1.45 (95% CI: 0.24, 2.65; P = 0.02, poor to excellent evidence), in repetition of 6.49 (95% CI: 0.97, 12.00; P = 0.02, acceptable to excellent evidence), and in naming of 11.44 (95% CI: 1.63, 21.26; P = 0.02, acceptable to excellent evidence). But there was no significant difference in comprehension, at 7.21 (95% CI: −10.88, 25.29; P = 0.43, acceptable to excellent evidence). Conclusions: Music therapy can improve functional communication, repetition, and naming in patients with post-stroke aphasia, but did not significantly improve comprehension. Trial registration: CRD42021251526
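The pooling described in the Methods can be sketched with a fixed-effect inverse-variance model. This is an illustration of the general technique, not a reproduction of the review's analysis (which may have used a random-effects model); any study mean differences and standard errors passed in below are hypothetical:

```python
import math

def pool_mean_differences(mds, ses):
    """Fixed-effect inverse-variance pooling of study mean differences.

    mds: per-study mean differences; ses: their standard errors.
    Returns the pooled MD, its 95% CI, Cochran's Q, and I^2 (%)."""
    w = [1.0 / se ** 2 for se in ses]          # inverse-variance weights
    pooled = sum(wi * mdi for wi, mdi in zip(w, mds)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Heterogeneity: Q compares each study against the pooled estimate.
    q = sum(wi * (mdi - pooled) ** 2 for wi, mdi in zip(w, mds))
    df = len(mds) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, q, i2
```

Cochran's Q is referred to a chi-square distribution with df = k − 1, and I² expresses the share of variability attributable to heterogeneity rather than chance.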
Article
By focusing on the contributions of subcortical structures, our commentary suggests that the functions of the hippocampus underlying “displacement,” a feature enabling humans to communicate things and situations that are remote in space and time, make language more effective at social bonding. Based on the functions of the basal ganglia and hippocampus, evolutionary trajectory of the subcomponents of music and language in different species will also be discussed.
Article
Full-text available
We extend Savage et al.’s music and social bonding hypothesis by examining it in the context of Chinese music. First, top-down functions, such as music as a political instrument, should receive more attention. Second, solo performance can serve as an important cue for social identity. Third, a good match between the tones in lyrics and the music also contributes to social bonding.
Article
Full-text available
Behavioral and brain rhythms in the millisecond-to-second range are central in human music, speech, and movement. A comparative approach can further our understanding of the evolution of rhythm processing by identifying behavioral and neural similarities and differences across cognitive domains and across animal species. We provide an overview of research into rhythm cognition in music, speech, and animal communication. Rhythm has received considerable attention within each individual field, but to date, little integration. This review article on rhythm processing incorporates and extends existing ideas on temporal processing in speech and music and offers suggestions about the neural, biological, and evolutionary bases of human abilities in these domains.
Article
Full-text available
Melodic intonation therapy (MIT) is a treatment program for the rehabilitation of aphasic patients with speech production disorders. We report a case of severe chronic non-fluent aphasia, unresponsive to several years of conventional therapy, that showed marked improvement following intensive 9-day training on the Japanese version of MIT (MIT-J). The purpose of this study was to verify the efficacy of MIT-J by functional assessment and to examine associated changes in neural processing by functional magnetic resonance imaging. MIT improved language output and auditory comprehension, and decreased the response time for picture naming. Following MIT-J, an area of the right hemisphere was less activated on correct naming trials than before training, but similarly activated on incorrect trials. These results suggest that the aphasic symptoms of our patient were improved by increased neural processing efficiency and a concomitant decrease in cognitive load.
Article
Full-text available
The present study investigates the neural correlates of rhythm processing in speech perception. German pseudosentences spoken with an exaggerated (isochronous) or a conversational (nonisochronous) rhythm were compared in an auditory functional magnetic resonance imaging experiment. The subjects had to perform either a rhythm task (explicit rhythm processing) or a prosody task (implicit rhythm processing). The study revealed bilateral activation in the supplementary motor area (SMA), extending into the cingulate gyrus, and in the insulae, extending into the right basal ganglia (neostriatum), as well as activity in the right inferior frontal gyrus (IFG) related to the performance of the rhythm task. A direct contrast between isochronous and nonisochronous sentences revealed differences in lateralization of activation for isochronous processing as a function of the explicit and implicit tasks. Explicit processing revealed activation in the right posterior superior temporal gyrus (pSTG), the right supramarginal gyrus, and the right parietal operculum. Implicit processing showed activation in the left supramarginal gyrus, the left pSTG, and the left parietal operculum. The present results indicate a function of the SMA and the insula beyond motor timing and speak for a role of these brain areas in the perception of acoustically temporal intervals. Secondly, the data speak for a specific task-related function of the right IFG in the processing of accent patterns. Finally, the data sustain the assumption that the right secondary auditory cortex is involved in the explicit perception of auditory suprasegmental cues and, moreover, that activity in the right secondary auditory cortex can be modulated by top-down processing mechanisms.
Article
Full-text available
Novel rehabilitation interventions have improved motor recovery by induction of neural plasticity in individuals with stroke. Of these, music-supported therapy (MST) is based on music training designed to restore motor deficits. Music training requires multimodal processing, involving the integration and co-operation of visual, motor, auditory, affective and cognitive systems. The main objective of this study was to assess, in a group of 20 individuals suffering from chronic stroke, the motor, cognitive, emotional and neuroplastic effects of MST. Using functional magnetic resonance imaging (fMRI), we observed a clear restitution of both activity and connectivity among auditory-motor regions of the affected hemisphere. Importantly, no differences were observed in this functional network in a healthy control group, ruling out possible confounds such as repeated imaging testing. Moreover, this increase in activity and connectivity between auditory and motor regions was accompanied by a functional improvement of the paretic hand. The present results confirm MST as a viable intervention to improve motor function in individuals with chronic stroke.
Article
Full-text available
It is often claimed that music and language share a process of hierarchical structure building, a mental "syntax." Although several lines of research point to commonalities, and possibly a shared syntactic component, differences between "language syntax" and "music syntax" can also be found at several levels: conveyed meaning, and the atoms of combination, for example. To bring music and language closer to one another, some researchers have suggested a comparison between music and phonology ("phonological syntax"), but here too, one quickly arrives at a situation of intriguing similarities and obvious differences. In this paper, we suggest that a fruitful comparison between the two domains could benefit from taking the grammar of action into account. In particular, we suggest that what is called "syntax" can be investigated in terms of goal of action, action planning, motor control, and sensory-motor integration. At this level of comparison, we suggest that some of the differences between language and music could be explained in terms of different goals reflected in the hierarchical structures of action planning: the hierarchical structures of music arise to achieve goals with a strong relation to the affective-gestural system encoding tension-relaxation patterns as well as socio-intentional system, whereas hierarchical structures in language are embedded in a conceptual system that gives rise to compositional meaning. Similarities between music and language are most clear in the way several hierarchical plans for executing action are processed in time and sequentially integrated to achieve various goals.
Article
Full-text available
Language impairment is relatively common in Parkinson's disease (PD), but not all PD patients are susceptible to language problems. In this study, we identified, among a sample of PD patients, those predisposed to language impairment, described their clinical profiles, and considered factors that may precipitate language disability in these patients. A cross-sectional cohort of 31 PD patients and 20 controls were administered the Chinese version of the Western Aphasia Battery (WAB) to assess language abilities, and the Montreal Cognitive Assessment (MoCA) to determine cognitive status. PD patients were then apportioned to a language-impaired PD (LI-PD) group or a PD group with no language impairment (NLI-PD). Performance on the WAB and MoCA was investigated for correlation with the aphasia quotient deterioration rate (AQDR). The PD patients scored significantly lower on most of the WAB subtests than did the controls. The aphasia quotient, cortical quotient, and spontaneous speech and naming subtests of the WAB were significantly different between LI-PD and NLI-PD groups. The AQDR scores significantly and positively correlated with age at onset and motor function deterioration. A subset of PD patients was susceptible to language dysfunction, with spontaneous speech as the major deficit. Once established, dysphasia progression is closely associated with age at onset and motor disability progression.
Article
Full-text available
The ability to entrain movements to music is arguably universal, but it is unclear how specialized training may influence this. Previous research suggests that percussionists have superior temporal precision in perception and production tasks. Such superiority may be limited to temporal sequences that resemble real music or, alternatively, may generalize to musically implausible sequences. To test this, percussionists and nonpercussionists completed two tasks that used rhythmic sequences varying in musical plausibility. In the beat tapping task, participants tapped with the beat of a rhythmic sequence over 3 stages: finding the beat (as an initial sequence played), continuation of the beat (as a second sequence was introduced and played simultaneously), and switching to a second beat (the initial sequence finished, leaving only the second). The meters of the two sequences were either congruent or incongruent, as were their tempi (minimum inter-onset intervals). In the rhythm reproduction task, participants reproduced rhythms of four types, ranging from high to low musical plausibility: Metric simple rhythms induced a strong sense of the beat, metric complex rhythms induced a weaker sense of the beat, nonmetric rhythms had no beat, and jittered nonmetric rhythms also had no beat as well as low temporal predictability. For both tasks, percussionists performed more accurately than nonpercussionists. In addition, both groups were better with musically plausible than implausible conditions. Overall, the percussionists' superior abilities to entrain to, and reproduce, rhythms generalized to musically implausible sequences.
Article
Full-text available
For thousands of years, human beings have engaged in rhythmic activities such as drumming, dancing, and singing. Rhythm can be a powerful medium to stimulate communication and social interactions, due to the strong sensorimotor coupling. For example, the mere presence of an underlying beat or pulse can result in spontaneous motor responses such as hand clapping, foot stepping, and rhythmic vocalizations. Examining the relationship between rhythm and speech is fundamental not only to our understanding of the origins of human communication but also in the treatment of neurological disorders. In this paper, we explore whether rhythm has therapeutic potential for promoting recovery from speech and language dysfunctions. Although clinical studies are limited to date, existing experimental evidence demonstrates rich rhythmic organization in both music and language, as well as overlapping brain networks that are crucial in the design of rehabilitation approaches. Here, we propose the "SEP" hypothesis, which postulates that (1) "sound envelope processing" and (2) "synchronization and entrainment to pulse" may help stimulate brain networks that underlie human communication. Ultimately, we hope that the SEP hypothesis will provide a useful framework for facilitating rhythm-based research in various patient populations.
Article
Full-text available
Melodic intonation therapy (MIT) is a structured protocol for language rehabilitation in people with Broca's aphasia. The main particularity of MIT is the use of intoned speech, a technique in which the clinician stylizes the prosody of short sentences using simple pitch and rhythm patterns. In the original MIT protocol, patients must repeat diverse sentences in order to espouse this way of speaking, with the goal of improving their natural, connected speech. MIT has long been regarded as a promising treatment but its mechanisms are still debated. Recent work showed that rhythm plays a key role in variations of MIT, leading to consider the use of pitch as relatively unnecessary in MIT. Our study primarily aimed to assess the relative contribution of rhythm and pitch in MIT's generalization effect to non-trained stimuli and to connected speech. We compared a melodic therapy (with pitch and rhythm) to a rhythmic therapy (with rhythm only) and to a normally spoken therapy (without melodic elements). Three participants with chronic post-stroke Broca's aphasia underwent the treatments in hourly sessions, 3 days per week for 6 weeks, in a cross-over design. The informativeness of connected speech, speech accuracy of trained and non-trained sentences, motor-speech agility, and mood was assessed before and after the treatments. The results show that the three treatments improved speech accuracy in trained sentences, but that the combination of rhythm and pitch elicited the strongest generalization effect both to non-trained stimuli and connected speech. No significant change was measured in motor-speech agility or mood measures with either treatment. The results emphasize the beneficial effect of both rhythm and pitch in the efficacy of original MIT on connected speech, an outcome of primary clinical importance in aphasia therapy.
Article
Full-text available
Singing has been used in language rehabilitation for decades, yet controversy remains over its effectiveness and mechanisms of action. Melodic Intonation Therapy (MIT) is the most well-known singing-based therapy; however, speculation surrounds when and how it might improve outcomes in aphasia and other language disorders. While positive treatment effects have been variously attributed to different MIT components, including melody, rhythm, hand-tapping, and the choral nature of the singing, there is uncertainty about the components that are truly necessary and beneficial. Moreover, the mechanisms by which the components operate are not well understood. Within the literature to date, proposed mechanisms can be broadly grouped into four categories: (1) neuroplastic reorganization of language function, (2) activation of the mirror neuron system and multimodal integration, (3) utilization of shared or specific features of music and language, and (4) motivation and mood. In this paper, we review available evidence for each mechanism and propose that these mechanisms are not mutually exclusive, but rather represent different levels of explanation, reflecting the neurobiological, cognitive, and emotional effects of MIT. Thus, instead of competing, each of these mechanisms may contribute to language rehabilitation, with a better understanding of their relative roles and interactions allowing the design of protocols that maximize the effectiveness of singing therapy for aphasia.
Article
Full-text available
Difficulties with temporal coordination or sequencing of speech movements are frequently reported in aphasia patients with concomitant apraxia of speech (AOS). Our major objective was to investigate the effects of specific rhythmic-melodic voice training on brain activation of those patients. Three patients with severe chronic nonfluent aphasia and AOS were included in this study. Before and after therapy, patients underwent the same fMRI procedure as 30 healthy control subjects in our prestudy, which investigated the neural substrates of sung vowel changes in untrained rhythm sequences. A main finding was that post-treatment minus pre-treatment imaging data yielded significant perilesional activations in all patients, for example in the left superior temporal gyrus, whereas the reverse subtraction revealed either no significant activation or right hemisphere activation. Likewise, pre- and posttreatment assessments of patients' vocal rhythm production, language, and speech motor performance yielded significant improvements for all patients. Our results suggest that changes in brain activation due to the applied training might indicate specific processes of reorganization, for example, improved temporal sequencing of sublexical speech components. In this context, a training that focuses on rhythmic singing with differently demanding complexity levels as concerns motor and cognitive capabilities seems to support paving the way for speech.
Article
Full-text available
The purpose of this study was to investigate whether or not the right hemisphere can be engaged using Melodic Intonation Therapy (MIT) and excitatory repetitive transcranial magnetic stimulation (rTMS) to improve language function in people with aphasia. The two participants in this study (GOE and AMC) have chronic non-fluent aphasia. A functional Magnetic Resonance Imaging (fMRI) task was used to localize the right Broca's homolog area in the inferior frontal gyrus for rTMS coil placement. The treatment protocol included an rTMS phase, which consisted of 3 treatment sessions that used an excitatory stimulation method known as intermittent theta burst stimulation, and a sham-rTMS phase, which consisted of 3 treatment sessions that used a sham coil. Each treatment session was followed by 40 min of MIT. A linguistic battery was administered after each session. Our findings show that one participant, GOE, improved in verbal fluency and the repetition of phrases when treated with MIT in combination with TMS. However, AMC showed no evidence of behavioral benefit from this brief treatment trial. Post-treatment neural activity changes were observed for both participants in the left Broca's area and right Broca's homolog. These case studies indicate that a combination of MIT and rTMS applied to the right Broca's homolog has the potential to improve speech and language outcomes for at least some people with post-stroke aphasia.
Article
Full-text available
We present a critical review of the literature on melodic intonation therapy (MIT), one of the most formalized treatments used by speech-language therapists in Broca's aphasia. We suggest basic clarifications to enhance the scientific support of this promising treatment. First, therapeutic protocols using singing as a speech facilitation technique are not necessarily MIT. The goal of MIT is to restore propositional speech. The rationale is that patients can learn a new way to speak through singing by using language-capable regions of the right cerebral hemisphere. Eventually, patients are supposed to use this way of speaking permanently but not to sing overtly. We argue that many treatment programs covered in systematic reviews on MIT's efficacy do not match MIT's therapeutic goal and rationale. Critically, we identified two main variations of MIT: the French thérapie mélodique et rythmée (TMR), which trains patients to use singing overtly as a facilitation technique in case of speech struggle, and palliative versions of MIT that help patients with the most severe expressive deficits produce a limited set of useful, ready-made phrases. Second, we distinguish between the immediate effect of singing on speech production and the long-term effect of the entire program on language recovery. Many results in the MIT literature can be explained by this temporal perspective. Finally, we propose that MIT can be viewed as a treatment of apraxia of speech more than aphasia. This issue should be explored in future experimental studies.
Article
Full-text available
Playing a musical instrument demands the engagement of different neural systems. Recent studies about the musician's brain and musical training highlight that this activity requires the close interaction between motor and somatosensory systems. Moreover, neuroplastic changes have been reported in motor-related areas after short and long-term musical training. Because of its capacity to promote neuroplastic changes, music has been used in the context of stroke neurorehabilitation. The majority of patients suffering from a stroke have motor impairments, preventing them from living independently. Thus, there is an increasing demand for effective restorative interventions for neurological deficits. Music-supported Therapy (MST) has been recently developed to restore motor deficits. We report data of a selected sample of stroke patients who have been enrolled in a MST program (1 month of intense music learning). Prior to and after the therapy, patients were evaluated with different behavioral motor tests. Transcranial Magnetic Stimulation (TMS) was applied to evaluate changes in the sensorimotor representations underlying the motor gains observed. Several parameters of excitability of the motor cortex were assessed, as well as the cortical somatotopic representation of a muscle in the affected hand. Our results revealed that participants obtained significant motor improvements in the paretic hand and that those changes were accompanied by changes in the excitability of the motor cortex. Thus, MST leads to neuroplastic changes in the motor cortex of stroke patients which may explain its efficacy.
Article
Full-text available
Language and music epitomize the complex representational and computational capacities of the human mind. Strikingly similar in their structural and expressive features, a longstanding question is whether the perceptual and cognitive mechanisms underlying these abilities are shared or distinct - either from each other or from other mental processes. One prominent feature shared between language and music is signal encoding using pitch, conveying pragmatics and semantics in language and melody in music. We investigated how pitch processing is shared between language and music by measuring consistency in individual differences in pitch perception across language, music, and three control conditions intended to assess basic sensory and domain-general cognitive processes. Individuals' pitch perception abilities in language and music were most strongly related, even after accounting for performance in all control conditions. These results provide behavioral evidence, based on patterns of individual differences, that is consistent with the hypothesis that cognitive mechanisms for pitch processing may be shared between language and music.
Article
Full-text available
There is an ongoing debate as to whether singing helps left-hemispheric stroke patients recover from non-fluent aphasia through stimulation of the right hemisphere. According to recent work, it may not be singing itself that aids speech production in non-fluent aphasic patients, but rhythm and lyric type. However, the long-term effects of melody and rhythm on speech recovery are largely unknown. In the current experiment, we tested 15 patients with chronic non-fluent aphasia who underwent either singing therapy, rhythmic therapy, or standard speech therapy. The experiment controlled for phonatory quality, vocal frequency variability, pitch accuracy, syllable duration, phonetic complexity and other influences, such as the acoustic setting and learning effects induced by the testing itself. The results provide the first evidence that singing and rhythmic speech may be similarly effective in the treatment of non-fluent aphasia. This finding may challenge the view that singing causes a transfer of language function from the left to the right hemisphere. Instead, both singing and rhythmic therapy patients made good progress in the production of common, formulaic phrases, known to be supported by right corticostriatal brain areas. This progress occurred at an early stage of both therapies and was stable over time. Conversely, patients receiving standard therapy made less progress in the production of formulaic phrases. They did, however, improve their production of non-formulaic speech, in contrast to singing and rhythmic therapy patients, who did not. In light of these results, it may be worth considering the combined use of standard therapy and the training of formulaic phrases, whether sung or rhythmically spoken. Standard therapy may engage, in particular, left perilesional brain regions, while training of formulaic phrases may open new ways of tapping into right-hemisphere language resources, even without singing.
Article
Full-text available
Background: The preservation of swearing, serial speech, and speech formulas is well documented in clinical descriptions of aphasia. Proper nouns and sentence stems have also been reported in the residual speech of severely aphasic subjects. The incidence of formulaic expressions in spontaneous speech of right-hemisphere-damaged subjects has not yet been well examined. Recent interest in formulaic expressions (FEs) in normal language use, combined with the converging evidence of a role for the right hemisphere in processing pragmatic elements of language, led to this study. Methods & Procedures: We undertook an examination of hypotheses about the hemispheric processing of FEs in the spontaneous speech of persons with left hemisphere (LH) and right hemisphere (RH) damage. Based on preserved use of formulaic expressions in clinically described aphasic speech, the hypothesis under examination in this study was that the intact RH has a role in the production of formulaic expressions. Further inquiries involved possible differences in incidence in the speech samples between subsets of FEs, such as proper nouns and discourse particles. Outcomes & Results: Our results indicate a greater proportion of FEs in the spontaneous speech of persons with LH damage, and proportionally fewer FEs in RH speech, when compared to normal control speakers. Examination of the incidence of separate categories indicates a paucity of proper noun production in the LH group, supporting the association of proper noun anomia with LH damage. Pragmatically determined vocal elements (pause fillers, discourse elements) were least present in RH dysfunction. These results suggest that clinical evaluation of formulaic as well as novel language functions may give important insights into the language disorder profile of various neurological populations.
The identification of relatively preserved formulaic expressions in LH damage may provide a basis for a more effective treatment plan, while evaluation of RH-damaged individuals using this perspective may identify communication disorders not previously recognised. Conclusions: These results support the notion that an intact RH supports use of some types of formulaic language.
Article
Full-text available
The "basal ganglia" refers to a group of subcortical nuclei responsible primarily for motor control, as well as other roles such as motor learning, executive functions and behaviors, and emotions. Proposed more than two decades ago, the classical basal ganglia model shows how information flows through the basal ganglia back to the cortex through two pathways with opposing effects for the proper execution of movement. Although much of the model has remained, the model has been modified and amplified with the emergence of new data. Furthermore, parallel circuits subserve the other functions of the basal ganglia engaging associative and limbic territories. Disruption of the basal ganglia network forms the basis for several movement disorders. This article provides a comprehensive account of basal ganglia functional anatomy and chemistry and the major pathophysiological changes underlying disorders of movement. We try to answer three key questions related to the basal ganglia, as follows: What are the basal ganglia? What are they made of? How do they work? Some insight on the canonical basal ganglia model is provided, together with a selection of paradoxes and some views over the horizon in the field.
Article
Full-text available
Using an adapted version of Melodic Intonation Therapy (MIT), we treated an adolescent girl with a very large left-hemisphere lesion and severe nonfluent aphasia secondary to an ischemic stroke. At the time of her initial assessment 15 months after her stroke, she had reached a plateau in her recovery despite intense and long-term traditional speech-language therapy (approximately five times per week for more than one year). Following an intensive course of treatment with our adapted form of MIT, her performance improved on both trained and untrained phrases, as well as on speech and language tasks. These behavioral improvements were accompanied by functional MRI changes in the right frontal lobe as well as by an increased volume of white matter pathways in the right hemisphere. No increase in white matter volume was seen in her healthy twin sister, who was scanned twice over the same time period. This case study not only provides further evidence for MIT's effectiveness, but also indicates that intensive treatment can induce functional and structural changes in a right-hemisphere fronto-temporal network.
Article
Full-text available
Beat induction (BI) is the cognitive skill that allows us to hear a regular pulse in music to which we can then synchronize. Perceiving this regularity in music allows us to dance and make music together. As such, it can be considered a fundamental musical trait that, arguably, played a decisive role in the origins of music. Furthermore, BI might be considered a spontaneously developing, domain-specific, and species-specific skill. Although both learning and perception/action coupling were shown to be relevant in its development, at least one study showed that the auditory system of a newborn is able to detect the periodicities induced by a varying rhythm. A related study with adults suggested that hierarchical representations for rhythms (meter induction) are formed automatically in the human auditory system. We will reconsider these empirical findings in the light of the question whether beat and meter induction are fundamental cognitive mechanisms.
Article
Full-text available
The question of whether singing may be helpful for stroke patients with non-fluent aphasia has been debated for many years. However, the role of rhythm in speech recovery appears to have been neglected. In the current lesion study, we aimed to assess the relative importance of melody and rhythm for speech production in 17 non-fluent aphasics. Furthermore, we systematically alternated the lyrics to test for the influence of long-term memory and preserved motor automaticity in formulaic expressions. We controlled for vocal frequency variability, pitch accuracy, rhythmicity, syllable duration, phonetic complexity and other relevant factors, such as learning effects or the acoustic setting. Contrary to some opinion, our data suggest that singing may not be decisive for speech production in non-fluent aphasics. Instead, our results indicate that rhythm may be crucial, particularly for patients with lesions including the basal ganglia. Among the patients we studied, basal ganglia lesions accounted for more than 50% of the variance related to rhythmicity. Our findings therefore suggest that benefits typically attributed to melodic intoning in the past could actually have their roots in rhythm. Moreover, our data indicate that lyric production in non-fluent aphasics may be strongly mediated by long-term memory and motor automaticity, irrespective of whether lyrics are sung or spoken.
Article
Full-text available
Bilinguals must focus their attention to control competing languages. In bilingual aphasia, damage to the fronto-subcortical loop may lead to pathological language switching and mixing and the attrition of the more automatic language (usually L1). We present the case of JZ, a bilingual Basque-Spanish 53-year-old man who, after a haematoma in the left basal ganglia, presented with executive deficits and aphasia, characterised by more impaired language processing in Basque, his L1. Assessment with the Bilingual Aphasia Test revealed impaired spontaneous and automatic speech production and speech rate in L1, as well as impaired L2-to-L1 sentence translation. Later observation led to the assessment of verbal and non-verbal executive control, which allowed JZ's impaired performance on language tasks to be related to executive dysfunction. In line with previous research, we report the significant attrition of L1 following damage to the left basal ganglia, reported for the first time in a Basque-Spanish bilingual. Implications for models of declarative and procedural memory are discussed.
Article
Full-text available
It has been reported that patients with severely nonfluent aphasia are better at singing lyrics than speaking the same words. This observation inspired the development of Melodic Intonation Therapy (MIT), a treatment whose effects have been shown, but whose efficacy is unproven and neural correlates remain unidentified. Because of its potential to engage/unmask language-capable regions in the unaffected right hemisphere, MIT is particularly well suited for patients with large left-hemisphere lesions. Using two patients with similar impairments and stroke size/location, we show the effects of MIT and a control intervention. Both interventions' post-treatment outcomes revealed significant improvement in propositional speech that generalized to unpracticed words and phrases; however, the MIT-treated patient's gains surpassed those of the control-treated patient. Treatment-associated imaging changes indicate that MIT's unique engagement of the right hemisphere, both through singing and tapping with the left hand to prime the sensorimotor and premotor cortices for articulation, accounts for its effect over nonintoned speech therapy.
Article
Full-text available
It has been reported for more than 100 years that patients with severe nonfluent aphasia are better at singing lyrics than they are at speaking the same words. This observation led to the development of melodic intonation therapy (MIT). However, the efficacy of this therapy has yet to be substantiated in a randomized controlled trial. Furthermore, its underlying neural mechanisms remain unclear. The two unique components of MIT are the intonation of words and simple phrases using a melodic contour that follows the prosody of speech and the rhythmic tapping of the left hand that accompanies the production of each syllable and serves as a catalyst for fluency. Research has shown that both components are capable of engaging fronto-temporal regions in the right hemisphere, thereby making MIT particularly well suited for patients with large left hemisphere lesions who also suffer from nonfluent aphasia. Recovery from aphasia can happen in two ways: either through the recruitment of perilesional brain regions in the affected hemisphere, with variable recruitment of right-hemispheric regions if the lesion is small, or through the recruitment of homologous language and speech-motor regions in the unaffected hemisphere if the lesion of the affected hemisphere is extensive. Treatment-associated neural changes in patients undergoing MIT indicate that the unique engagement of right-hemispheric structures (e.g., the superior temporal lobe, primary sensorimotor, premotor and inferior frontal gyrus regions) and changes in the connections across these brain regions may be responsible for its therapeutic effect.
Article
Full-text available
The recent discovery of spontaneous synchronization to music in a nonhuman animal (the sulphur-crested cockatoo Cacatua galerita eleonora) raises several questions. How does this behavior differ from nonmusical synchronization abilities in other species, such as synchronized frog calls or firefly flashes? What significance does the behavior have for debates over the evolution of human music? What kinds of animals can synchronize to musical rhythms, and what are the key methodological issues for research in this area? This paper addresses these questions and proposes some refinements to the "vocal learning and rhythmic synchronization hypothesis."
Article
Neural activity phase-locks to rhythm in both music and speech. However, the literature currently lacks a direct test of whether cortical tracking of matched rhythmic structure is comparable across domains. Moreover, although musical training improves multiple aspects of music and speech perception, the relationship between musical training and cortical tracking of rhythm has not been compared directly across domains. We recorded the electroencephalograms (EEG) from 28 participants (14 female) with a range of musical training who listened to melodies and sentences with identical rhythmic structure. We compared cerebral-acoustic coherence (CACoh) between the EEG signal and single-trial stimulus envelopes (as a measure of cortical entrainment) across domains and correlated years of musical training with CACoh. We hypothesized that neural activity would be comparably phase-locked across domains, and that the amount of musical training would be associated with increasingly strong phase locking in both domains. We found that participants with only a few years of musical training had a comparable cortical response to music and speech rhythm, partially supporting the hypothesis. However, the cortical response to music rhythm increased with years of musical training while the response to speech rhythm did not, leading to an overall greater cortical response to music rhythm across all participants. We suggest that task demands shaped the asymmetric cortical tracking across domains.
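The cerebral-acoustic coherence measure described above quantifies, frequency by frequency, how consistently a neural signal phase-locks to a stimulus envelope. A minimal sketch of the idea (not the study's actual pipeline; the simulated signals, 4 Hz rhythm, and all parameters are hypothetical choices for the demonstration) using Welch-averaged magnitude-squared coherence:

```python
import numpy as np
from scipy.signal import coherence

# Simulate 20 s of an "EEG" channel and a stimulus amplitude envelope
# that share a 4 Hz rhythmic component, each corrupted by independent noise.
rng = np.random.default_rng(0)
fs = 250                                   # sampling rate in Hz
t = np.arange(0, 20, 1 / fs)
rhythm = np.sin(2 * np.pi * 4 * t)         # shared 4 Hz rhythm

eeg = rhythm + 0.5 * rng.standard_normal(t.size)       # neural signal + noise
envelope = rhythm + 0.5 * rng.standard_normal(t.size)  # envelope + noise

# Magnitude-squared coherence, averaged over Welch segments:
# values near 1 indicate strong phase-locking at that frequency, near 0 none.
f, cxy = coherence(eeg, envelope, fs=fs, nperseg=500)

idx_rhythm = np.argmin(np.abs(f - 4.0))    # bin at the shared 4 Hz rhythm
idx_control = np.argmin(np.abs(f - 40.0))  # unrelated control frequency
print(f"coherence at 4 Hz:  {cxy[idx_rhythm]:.2f}")   # high
print(f"coherence at 40 Hz: {cxy[idx_control]:.2f}")  # low
```

Because coherence is normalized by the power of each signal, it isolates the consistency of the phase relationship from overall amplitude, which is what makes it usable across stimuli as different as melodies and sentences.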
Book
The study of how the brain processes temporal information is becoming one of the most important topics in systems, cellular, computational, and cognitive neuroscience, as well as in the physiological bases of music and language. During the last and current decade, interval timing has been intensively studied in humans and animals using increasingly sophisticated methodological approaches. The present book will bring together the latest information gathered from this exciting area of research, putting special emphasis on the neural underpinnings of time processing in behaving human and non-human primates. Thus, Neurobiology of Interval Timing will integrate for the first time the current knowledge of both animal behavior and human cognition of the passage of time in different behavioral contexts, including the perception and production of time intervals, as well as rhythmic activities, using different experimental and theoretical frameworks. The book will be composed of chapters written by the leading experts in the fields of psychophysics, functional imaging, systems neurophysiology, and musicology. This cutting-edge scientific work will integrate the current knowledge of the neurobiology of timing behavior, putting into perspective the current hypotheses of how the brain quantifies the passage of time across a wide variety of critical behaviors.
Article
Purpose: Apraxia of speech (AOS) is a consequence of stroke that frequently co-occurs with aphasia. Its study is limited by difficulties with its perceptual evaluation and dissociation from co-occurring impairments. This study examined the classification accuracy of several acoustic measures for the differential diagnosis of AOS in a sample of stroke survivors. Method: Fifty-seven individuals were included (mean age = 60.8 ± 10.4 years; 21 women, 36 men; mean months poststroke = 54.7 ± 46). Participants were grouped on the basis of speech/language testing as follows: AOS-Aphasia (n = 20), Aphasia Only (n = 24), and Stroke Control (n = 13). Normalized Pairwise Variability Index, proportion of distortion errors, voice onset time variability, and amplitude envelope modulation spectrum variables were obtained from connected speech samples. Measures were analyzed for group differences and entered into a linear discriminant analysis to predict diagnostic classification. Results: Out-of-sample classification accuracy of all measures was over 90%. The envelope modulation spectrum variables had the greatest impact on classification when all measures were analyzed together. Conclusions: This study contributes to efforts to identify objective acoustic measures that can facilitate the differential diagnosis of AOS. Results suggest that further study of these measures is warranted to determine the best predictors of AOS diagnosis. Supplemental materials: https://doi.org/10.23641/asha.5611309.
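One of the duration-based measures used in the AOS classification study above, the normalized Pairwise Variability Index (nPVI), captures how much successive interval durations contrast with one another. A minimal sketch of the standard nPVI formula (the function name and example durations are illustrative, not taken from the paper):

```python
def npvi(durations):
    """Normalized Pairwise Variability Index over a sequence of interval
    durations (e.g. vowel or syllable durations in seconds). For each
    adjacent pair, the absolute difference is divided by the pair mean;
    the result is the average of these ratios, scaled by 100. Higher
    values indicate greater durational contrast between neighbors."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    pairs = zip(durations, durations[1:])
    terms = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(terms) / len(terms)

print(npvi([0.2, 0.2, 0.2]))  # perfectly even rhythm -> 0.0
print(npvi([0.1, 0.3, 0.1]))  # strong long-short alternation -> ~100
```

Normalizing each difference by the local pair mean makes the index insensitive to overall speaking rate, which is why it is favored for comparing rhythm across speakers whose tempo differs.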
Article
Music is part of human nature, and it is also phylogenetically relevant to language evolution. Language and music are bound together in the enhancement of important social functions, such as communication, cooperation and social cohesion. In the last few years, there has been growing evidence that music and music therapy may improve communication skills (among other functions) in different neurological disorders. One plausible reason for the rational use of sound and music in neurorehabilitation is the possibility of stimulating brain areas involved in emotional processing and motor control, such as the fronto-parietal network. In this narrative review, we describe the role of music therapy in improving aphasia and other neurological disorders, outlining the reasons why this tool could be effective in rehabilitative settings, especially in individuals affected by stroke.
Article
The evolution of vocal communication in humans required the emergence of not only voluntary control of the vocal apparatus and a flexible vocal repertoire, but the capacity for vocal learning. All of these capacities are lacking in non-human primates, suggesting that the vocal brain underwent significant modifications during human evolution. We review research spanning from early neurophysiological descriptions of great apes to the state of the art in human neuroimaging on the neural organization of the larynx motor cortex, the major regulator of vocalization for both speech and song in humans. We describe changes to the location, structure, function, and connectivity of the larynx motor cortex in humans compared with non-human primates, including critical gaps in the current understanding of the brain systems mediating vocal control and vocal learning. We explore a number of models of the origins of the vocal brain that incorporate findings from comparative neuroscience, and conclude by presenting a summary of contemporary hypotheses that can guide future research.
Chapter
This chapter explores current evidence supporting the basal ganglia's involvement in language processing. We begin with a review of the anatomy of the basal ganglia loops and discuss two prefrontal cortex loops potentially supporting language functions. Specifically, we consider the pre-supplementary motor area (pre-SMA) loop and the Broca's area loop, as well as white matter connectivity between these cortical areas. Considering current evidence, we propose that the pre-SMA loop may be involved in internally guided selection of lexical items, while Broca's area–basal ganglia circuitry may support selection of appropriate phonological and articulatory representations of these items. White matter connections between Broca's area and pre-SMA may enable information transfer between these two prefrontal cortex–basal ganglia loops supporting language functions.
Article
Language, more than anything else, is what makes us human. It appears that no communication system of equivalent power exists elsewhere in the animal kingdom. Any normal human child will learn a language based on rather sparse data in the surrounding world, while even the brightest chimpanzee, exposed to the same environment, will not. Why not? How, and why, did language evolve in our species and not in others? Since Darwin's theory of evolution, questions about the origin of language have generated a rapidly-growing scientific literature, stretched across a number of disciplines, much of it directed at specialist audiences. The diversity of perspectives - from linguistics, anthropology, speech science, genetics, neuroscience and evolutionary biology - can be bewildering. Tecumseh Fitch cuts through this vast literature, bringing together its most important insights to explore one of the biggest unsolved puzzles of human history.
Article
Introduction Classical attempts to capture the nature of aphasia have been “corticocentric” in identifying language processes. Recent conceptual and technical advances force us to reconsider the neural bases of language. In this work we aim to review several studies of damage to subcortical structures (thalamus and basal ganglia) that show large effects on language performance. Different sources of evidence have shown the close relationship between damage to the thalamus and the basal ganglia and language deficits. The observation of acquired lesions such as cerebrovascular accidents and traumatic brain injuries, degenerative processes such as dementia, and treatment studies for aphasia are starting to bring some insight into the contribution of these subcortical structures to the language network (Damasio et al., 1982; Weiller et al., 1993; Luzzatti et al., 2006). However, so far, scattered information and different methodological approaches make it difficult to build up a clear picture. After exploring the relationship between aphasia and lesion sites in the thalamus and the basal ganglia, and comparing the language profiles associated with damage to each structure, we found that the incidence of aphasia following lesions to subcortical regions is similar to that following cortical insult. We feel that this has important consequences for both assessment and recovery. The review also exposes the need to add association tracts to the discussion. Methods The literature review conducted on PubMed and Medline included the key words ‘aphasia’ AND ‘lesion’ AND (‘thalamus’ OR ‘basal ganglia’). After a first search, which retrieved 159 results (80 on the thalamus, 79 on the basal ganglia), we applied the following inclusion criteria: articles referring to adult acquired aphasia (rather than congenital deficits) written in English. A total of 43 studies were finally included (18 referring to the thalamus, 25 to the basal ganglia).
Two were literature reviews and 41 reported the results of either case or group studies in typologically different languages, including among others Chinese, Dutch, English, German, Japanese, Portuguese, Serbian, and Turkish. A final data set of 682 individuals (288 with thalamic lesions and 394 with basal ganglia lesions) was classified according to the presence/absence of assessable language deficits. When possible, subcortical damage was further specified. Results Taken together, the results indicate that aphasia is a common outcome after a lesion to subcortical structures. Findings show that 110 of the 394 aphasic patients with lesions in the basal ganglia exhibited comprehension deficits, compared with only 31 of the 288 participants with thalamic aphasia. Likewise, 129 of the 394 patients with basal ganglia lesions had impaired naming, versus 12 of the 288 individuals with thalamic aphasia. See Figure 1. Figure 1: The percentage of language impairment in two sets of aphasic patients (thalamic vs. basal ganglia lesions). Despite contradictory results and even cases of double dissociation (for an example of absence of language deficits after thalamic lesions, see Cappa et al., 1986), our literature review confirms the major role of subcortical structures in language processing.
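The counts reported in this review can be converted into the percentages plotted in its Figure 1 with a few lines. The numbers below come directly from the abstract (comprehension: 110/394 basal ganglia vs. 31/288 thalamus; naming: 129/394 vs. 12/288); the script is only a back-of-the-envelope check, not the authors' analysis code:

```python
# Impairment rates derived from the counts reported in the review.
counts = {
    "comprehension": {"basal ganglia": (110, 394), "thalamus": (31, 288)},
    "naming":        {"basal ganglia": (129, 394), "thalamus": (12, 288)},
}

for deficit, sites in counts.items():
    for site, (impaired, total) in sites.items():
        pct = 100 * impaired / total
        print(f"{deficit:13s} {site:13s} {pct:5.1f}%")
# comprehension: 27.9% (basal ganglia) vs. 10.8% (thalamus)
# naming:        32.7% (basal ganglia) vs.  4.2% (thalamus)
```

In both comparisons the impairment rate is roughly three to eight times higher after basal ganglia damage than after thalamic damage, which is the contrast Figure 1 visualizes.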
Article
Neurodegenerative changes of the basal ganglia in idiopathic Parkinson's disease (IPD) lead to motor deficits as well as general cognitive decline. Given these impairments, the question arises as to whether motor and nonmotor deficits can be ameliorated similarly. We reason that a domain-general sensorimotor circuit involved in temporal processing may support the remediation of such deficits. Following findings that auditory cuing benefits gait kinematics, we explored whether reported language-processing deficits in IPD can also be remediated via auditory cuing. During continuous EEG measurement, an individual diagnosed with IPD heard two types of temporally predictable but metrically different auditory beat-based cues (a march, which metrically aligned with the speech accent structure, and a waltz, which did not) or no cue before listening to naturally spoken sentences that were either grammatically well formed or semantically or syntactically incorrect. Results confirmed that only cuing with a march led to improved computation of syntactic and semantic information. We infer that a marching rhythm may lead to a stronger engagement of the cerebello-thalamo-cortical circuit that compensates for dysfunctional striato-cortical timing. Reinforcing temporal realignment, in turn, may lead to the timely processing of linguistic information embedded in the temporally variable speech signal. © 2014 New York Academy of Sciences.
Article
Abstract Clinically, we know that some aphasic patients can sing well despite their speech disturbances. Herein, we report 10 patients with non-fluent aphasia, half of whom improved their speech function after singing training. We studied ten patients with non-fluent aphasia complaining of difficulty finding words. All had lesions in the left basal ganglia or temporal lobe. They selected melodies they knew well but could no longer sing. We made new lyrics for a familiar melody using words they could not name. The singing training using these new lyrics was performed for 30 minutes once a week for 10 weeks. Before and after the training, their speech functions were assessed by language tests. At baseline, 6 of them received positron emission tomography to evaluate glucose metabolism. Five patients exhibited improvements after intervention; all but one exhibited intact right basal ganglia and left temporal lobes, but all exhibited left basal ganglia lesions. Among them, three subjects exhibited preserved glucose metabolism in the right temporal lobe. We conclude that intact right basal ganglia and left temporal lobes, together with preserved right-hemispheric glucose metabolism, may indicate that singing therapy will be effective.
Article
The paper aims to shed light on how serial order is computed in the human mind/brain, focusing on the nature of linearization in language. Linearization is here understood as the mapping of hierarchical syntactic structures onto linear strings. We take as our point of departure the now well-established need to subdivide Broca's region into different areas, and claim that these brain areas play important and distinct roles in the context of linearization. Crucially, for this mapping to be valid, linearization must be decomposed into a series of distinct (generic) sub-operations. Thus, the present work highlights the benefit of decomposing Broca's area and the linearization algorithm in parallel to formulate linking hypotheses between mind and brain.
Article
Any account of "what is special about the human brain" (Passingham 2008) must specify the neural basis of our unique ability to produce speech and delineate how these remarkable motor capabilities could have emerged in our hominin ancestors. Clinical data suggest the basal ganglia provide a platform for the integration of primate-general mechanisms of acoustic communication with the faculty of articulate speech in humans. Furthermore, neurobiological and paleoanthropological data point at a two-stage model of the phylogenetic evolution of this crucial prerequisite of spoken language: (i) monosynaptic refinement of the projections of motor cortex to the brainstem nuclei that steer laryngeal muscles, presumably, as part of a "phylogenetic trend" associated with increasing brain size during hominin evolution, (ii) subsequent vocal-laryngeal elaboration of cortico-basal ganglia circuitries, driven by human-specific FOXP2 mutations. This concept implies vocal continuity of spoken language evolution at the motor level, elucidating the deep entrenchment of articulate speech into a "nonverbal matrix" (Ingold 1994) which is not accounted for by gestural-origin theories. Moreover, it provides a solution to the question for the adaptive value of the "first word" (Bickerton 2009) since even the earliest and most simple verbal utterances must have increased the versatility of vocal displays afforded by the preceding elaboration of monosynaptic corticobulbar tracts, giving rise to enhanced social cooperation and prestige. At the ontogenetic level, the proposed model assumes age-dependent interactions between the basal ganglia and their cortical targets, similar to vocal learning in some songbirds. In this view, the emergence of articulate speech builds on the "renaissance" of an ancient organizational principle and, hence, may represent an example of "evolutionary tinkering" (Jacob 1977).
Article
There is now a vigorous debate over the evolutionary status of music. Some scholars argue that humans have been shaped by evolution to be musical, while others maintain that musical abilities have not been a target of natural selection but reflect an alternative use of more adaptive cognitive skills. One way to address this debate is to break music cognition into its underlying components and determine whether any of these are innate, specific to music, and unique to humans. Taking this approach, Justus and Hutsler (2005) and McDermott and Hauser (2005) suggest that musical pitch perception can be explained without invoking natural selection for music. However, they leave the issue of musical rhythm largely unexplored. This comment extends their conceptual approach to musical rhythm and suggests how issues of innateness, domain specificity, and human specificity might be addressed. © 2006 by the Regents of the University of California. All Rights Reserved.
Article
We used H₂¹⁵O PET to characterize the interaction of words and melody by comparing brain activity measured while subjects spoke or sang the words to a familiar song. Relative increases in activity during speaking vs. singing were observed in the left hemisphere, in classical perisylvian language areas including the posterior superior temporal gyrus, supramarginal gyrus, and frontal operculum, as well as in Rolandic cortices and putamen. Relative increases in activity during singing were observed in the right hemisphere: these were maximal in the right anterior superior temporal gyrus and contiguous portions of the insula; relative increases associated with singing were also detected in the right anterior middle temporal gyrus and superior temporal sulcus, medial and dorsolateral prefrontal cortices, mesial temporal cortices and cerebellum, as well as in Rolandic cortices and nucleus accumbens. These results indicate that the production of words in song is associated with activation of regions within right hemisphere areas that are not mirror-image homologues of left hemisphere perisylvian language areas, and suggest that multiple neural networks may be involved in different aspects of singing. Right hemisphere mechanisms may support the fluency-evoking effects of singing in neurological disorders such as stuttering or aphasia.
Article
Objectives: To identify corticobulbar tract changes that may predict chronic dysarthria in young people who have sustained a traumatic brain injury (TBI) in childhood using diffusion MRI tractography. Methods: We collected diffusion-weighted MRI data from 49 participants. We compared 17 young people (mean age 17 years, 10 months; on average 8 years postinjury) with chronic dysarthria who sustained a TBI in childhood (range 3-16 years) with 2 control groups matched for age and sex: 1 group of young people who sustained a traumatic injury but had no subsequent dysarthria (n = 15), and 1 group of typically developing individuals (n = 17). We performed tractography from spherical seed regions within the precentral gyrus white matter to track: 1) the hand-related corticospinal tract; 2) the dorsal corticobulbar tract, thought to correspond to the lips/larynx motor representation; and 3) the ventral corticobulbar tract, corresponding to the tongue representation. Results: Despite widespread white matter damage, radial (perpendicular) diffusivity within the left dorsal corticobulbar tract was the best predictor of the presence of dysarthria after TBI. Diffusion metrics in this tract also predicted speech and oromotor performance across the whole group of TBI participants, with additional significant contributions from ventral speech tract volume in the right hemisphere. Conclusion: An intact left dorsal corticobulbar tract seems crucial to the normal execution of speech long term after acquired injury. Examining the speech-related motor pathways using diffusion-weighted MRI tractography offers a promising prognostic tool for people with acquired, developmental, or degenerative neurologic conditions likely to affect speech.
Article
This review summarizes recent experiments on neuronal mechanisms underlying goal-directed behaviour. We investigated two basic processes, the internally triggered initiation of movement and the processing of reward information. Single neurons in the striatum (caudate nucleus, putamen and ventral striatum) were activated a few seconds before self-initiated movements in the absence of external triggering stimuli. Similar activations were observed in the closely connected cortical supplementary motor area, suggesting that these activations might evolve through build-up in fronto-basal ganglia loops. They may relate to intentional states directed at movements and their outcomes. As a second result, neurons in the striatum were activated in relation to the expectation and detection of rewards. Since rewards constitute important goals of behaviour, these activations might reflect the evaluation of outcome before the behavioural reaction is executed. Thus neurons in the basal ganglia are involved in individual components of goal-directed behaviour.
Article
A semantic anomaly judgement test was used to test the hypothesis that sentence comprehension errors by agrammatic aphasics arise as a consequence of faulty mapping from syntactic functions to thematic roles. In one condition, anomalies arose out of a thematic role reversal which was carried by the syntactic structure (e.g., # The worm swallowed the bird). Mis-mapping in these cases would have the effect of altering plausibility and hence resulting in erroneous judgments. In a second condition, mismapping was of no consequence (e.g., # The cat divorced the milk). Effects of sentence length (“Padding”) and of the transparency with which thematic roles are syntactically encoded (“Moved-arguments”) were examined across both types of anomaly. Overall, the performance pattern of agrammatics reveals considerable sensitivity to syntactic structure per se. Their difficulty seems to lie in the utilization of syntactic information for the assignment of thematic roles, particularly where the syntactic relationship between the verb and its noun arguments is not transparently evident in surface structure.
Article
This paper deals with the relationship between subject agreement and extracted subjects. In some languages (local) extraction of the subject triggers the Anti-Agreement Effect (AAE), whereby the verb cannot agree with the extracted subject; instead, the verb has an invariable (third person singular) form. It is argued that the AAE is a strategy used by some Null Subject languages, in particular those which locally move their wh-subjects in the syntax, to avoid the licensing of a resumptive pro in the closest subject position. This strategy is necessary because a resumptive pro in this position would be accessible to (A-)binding by the moved wh-subject, in violation of an A-disjointness requirement on the distribution of pronominal elements. It is argued, following Aoun and Li (1989), that the latter incorporates a Minimality effect, necessary to account for the fact that the presence of negation helps undo the AAE, obligatorily in some languages and optionally in others.
Article
For more than 100 years, clinicians have noted that patients with nonfluent aphasia are capable of singing words that they cannot speak. Thus, the use of melody and rhythm has long been recommended for improving aphasic patients' fluency, but it was not until 1973 that a music-based treatment [Melodic Intonation Therapy (MIT)] was developed. Our ongoing investigation of MIT's efficacy has provided valuable insight into this therapy's effect on language recovery. Here we share those observations, our additions to the protocol that aim to enhance MIT's benefit, and the rationale that supports them.
Article
Recovery from aphasia can be achieved through recruitment of either perilesional brain regions in the affected hemisphere or homologous language regions in the nonlesional hemisphere. For patients with large left-hemisphere lesions, recovery through the right hemisphere may be the only possible path. The right-hemisphere regions most likely to play a role in this recovery process are the superior temporal lobe (important for auditory feedback control), premotor regions/posterior inferior frontal gyrus (important for planning and sequencing of motor actions and for auditory-motor mapping), and the primary motor cortex (important for execution of vocal motor actions). These regions are connected reciprocally via a major fiber tract called the arcuate fasciculus (AF); however, this tract is not as well developed in the right hemisphere as it is in the dominant left. We tested whether an intonation-based speech therapy (i.e., melodic intonation therapy [MIT]), which is typically administered intensively with 75-80 daily therapy sessions, would lead to changes in white-matter tracts, particularly the AF. Using diffusion tensor imaging (DTI), we found a significant increase in the number of AF fibers and in AF volume comparing post- with pretreatment assessments in six patients, an increase that could not be attributed to scan-to-scan variability. This suggests that intense, long-term MIT leads to remodeling of the right AF and may provide an explanation for the sustained therapy effects that were seen in these six patients.
Article
Perception of musical rhythms is culturally universal. Despite this special status, relatively little is known about the neurobiology of rhythm perception, particularly with respect to beat processing. Findings are presented here from a series of studies that have specifically examined the neural basis of beat perception, using functional magnetic resonance imaging (fMRI) and studying patients with Parkinson's disease. fMRI data indicate that novel beat-based sequences robustly activate the basal ganglia when compared to irregular, nonbeat sequences. Furthermore, although most healthy participants find it much easier to discriminate changes in beat-based sequences compared to irregular sequences, Parkinson's disease patients fail to show the same degree of benefit. Taken together, these data suggest that the basal ganglia are performing a crucial function in beat processing. The results of an additional fMRI study indicate that the role of the basal ganglia is strongly linked to internal generation of the beat. Basal ganglia activity is greater when participants listen to rhythms in which internal generation of the beat is required, as opposed to rhythms with strongly externally cued beats. Functional connectivity between part of the basal ganglia (the putamen) and cortical motor areas (premotor and supplementary motor areas) is also higher during perception of beat rhythms compared to nonbeat rhythms. Increased connectivity between cortical motor and auditory areas is found in those with musical training. The findings from these converging methods strongly implicate the basal ganglia in processing a regular beat, particularly when internal generation of the beat is required.