Article

Finding Your Voice: A Singing Lesson From Functional Imaging

Wiley
Human Brain Mapping

Abstract

Vocal singing (singing with lyrics) shares features common to music and language but it is not clear to what extent they use the same brain systems, particularly at the higher cortical level, and how this varies with expertise. Twenty-six participants of varying singing ability performed two functional imaging tasks. The first examined covert generative language using orthographic lexical retrieval while the second required covert vocal singing of a well-known song. The neural networks subserving covert vocal singing and language were found to be proximally located, and their extent of cortical overlap varied with singing expertise. Nonexpert singers showed greater engagement of their language network during vocal singing, likely accounting for their less tuneful performance. In contrast, expert singers showed a more unilateral pattern of activation associated with reduced engagement of the right frontal lobe. The findings indicate that singing expertise promotes independence from the language network with decoupling producing more tuneful performance. This means that the age-old singing practice of 'finding your singing voice' may be neurologically mediated by changing how strongly singing is coupled to the language system.


... Neuroplasticity of music cognition has been far less extensively investigated in epilepsy, despite proximal and partially overlapping fronto-temporal auditory brain networks with language (Brown et al., 2006;Callan et al., 2006;Besson et al., 2007;Schön et al., 2010;Wilson et al., 2011;Merrill et al., 2012). Given these parallels, evidence for epilepsy-induced language reorganization, and the effects of recurrent seizures on cognition more generally (Marques et al., 2007;Kaaden and Helmstaedter, 2009;Tavakoli et al., 2011), it may be reasonably argued that music functions are susceptible to similar neuroplastic mechanisms. ...
... Given these parallels, evidence for epilepsy-induced language reorganization, and the effects of recurrent seizures on cognition more generally (Marques et al., 2007;Kaaden and Helmstaedter, 2009;Tavakoli et al., 2011), it may be reasonably argued that music functions are susceptible to similar neuroplastic mechanisms. This hypothesis, however, is challenged by the presumed greater bilaterality of music functions (Altenmüller, 2003;Wan and Schlaug, 2010;Wilson et al., 2011;Merrett and Wilson, 2012). Although both music and language are bilaterally distributed, especially for early stages of auditory processing (Zatorre et al., 2002;Hickok and Poeppel, 2004), music abilities appear less strongly lateralized for later stages of higher cognitive processing. ...
... Although both music and language are bilaterally distributed, especially for early stages of auditory processing (Zatorre et al., 2002;Hickok and Poeppel, 2004), music abilities appear less strongly lateralized for later stages of higher cognitive processing. Thus, tasks involving vocal or instrumental music perception and production primarily engage the fronto-temporal cortices, but also recruit widely distributed bilateral networks throughout the brain (Platel et al., 2003;Brown et al., 2004, 2006;Halpern et al., 2004;Callan et al., 2006;Stewart et al., 2008;Wilson et al., 2011). This raises the question of how recurrent unilateral seizures may affect a cognitive domain that is inherently bilaterally organized (Zatorre, 2005). ...
Article
Focal epilepsy is a unilateral brain network disorder, providing an ideal neuropathological model with which to study the effects of focal neural disruption on a range of cognitive processes. While language and memory functions have been extensively investigated in focal epilepsy, music cognition has received less attention, particularly in patients with music training or expertise. This represents a critical gap in the literature. A better understanding of the effects of epilepsy on music cognition may provide greater insight into the mechanisms behind disease- and training-related neuroplasticity, which may have implications for clinical practice. In this cross-sectional study, we comprehensively profiled music and non-music cognition in 107 participants; musicians with focal epilepsy (n = 35), non-musicians with focal epilepsy (n = 39), and healthy control musicians and non-musicians (n = 33). Parametric group comparisons revealed a specific impairment in verbal cognition in non-musicians with epilepsy but not musicians with epilepsy, compared to healthy musicians and non-musicians (P = 0.029). This suggests a possible neuroprotective effect of music training against the cognitive sequelae of focal epilepsy, and implicates potential training-related cognitive transfer that may be underpinned by enhancement of auditory processes primarily supported by temporo-frontal networks. Furthermore, our results showed that musicians with an earlier age of onset of music training performed better on a composite score of melodic learning and memory compared to non-musicians (P = 0.037), while late-onset musicians did not differ from non-musicians. For most composite scores of music cognition, although no significant group differences were observed, a similar trend was apparent. We discuss these key findings in the context of a proposed model of three interacting dimensions (disease status, music expertise, and cognitive domain), and their implications for clinical practice, music education, and music neuroscience research.
... Studies in neuroscience, second language acquisition, music therapy and literacy development all show that music, and in particular singing with words, connects brain networks and has a positive effect on language learning (Jeffries, Fritz & Braun, 2003;Overy, 2006;Schlaug, 2015). Add to this the research on the social and emotional benefits of singing, especially group singing (de Jong, 2013;Wilson, Abbott, Lusher, Gentle, & Jackson, 2011), and there is little doubt that working with song and music in our ESL and EFL classes can be a rewarding experience for all involved. ...
... Singing plays a part in social cohesion, motivation and group identity (Wilson et al., 2011). Some research suggests that the “preservation of actual words is higher in singing than in storytelling” (Wilson et al., 2011, p. 2116). ...
... Singing plays a part in social cohesion, motivation and group identity (Wilson et al., 2011). Some research suggests that the “preservation of actual words is higher in singing than in storytelling” (Wilson et al., 2011, p. 2116). Such benefits are not a new thought for those in oral cultures which have long valued singing and music as a means of preserving identity, faith, culture and history. ...
Article
Article for Think Tank Bulletin, Mind Brain Ed SIG of Japan Association of Language Teachers. Presented at 3rd Pronunciation Symposium, Wollongong, NSW, 2018.
... The same therapy could lead to different patterns of structural and functional neuroplasticity across individuals who had different brain structure and function to start with. A highly relevant example is the way that the relationship between the singing and language networks in the brain is modulated by singing expertise (Wilson et al., 2011). Since MIT is a singing-based therapy, this variable relationship between the singing and language networks could potentially influence both the efficacy of MIT and the resulting language reorganization. ...
... If so, this could explain why MIT, which includes singing common phrases, would be better placed than other therapies to take advantage of such a system. It is worthwhile noting that singing or intoning activates a bilateral fronto-temporal network that overlaps with the putative mirror neuron system to a certain degree (Ozdemir et al., 2006;Kleber et al., 2007;Wilson et al., 2011). Nonetheless, there is no direct evidence that MIT leverages this system through intonation or hand-tapping. ...
... However, this is difficult to reconcile with the bulk of the neuroimaging findings after MIT treatment presented above. Another study that has investigated the neurocognitive relationship between singing and speaking provides an alternative argument by considering the role of expertise (Wilson et al., 2011). These researchers found that singing expertise is associated with a decoupling of the singing network from the language network, with more focal, left lateralized functional activation for singing that is proximal but posterior to language activation. ...
Article
Full-text available
Singing has been used in language rehabilitation for decades, yet controversy remains over its effectiveness and mechanisms of action. Melodic Intonation Therapy (MIT) is the most well-known singing-based therapy; however, speculation surrounds when and how it might improve outcomes in aphasia and other language disorders. While positive treatment effects have been variously attributed to different MIT components, including melody, rhythm, hand-tapping, and the choral nature of the singing, there is uncertainty about the components that are truly necessary and beneficial. Moreover, the mechanisms by which the components operate are not well understood. Within the literature to date, proposed mechanisms can be broadly grouped into four categories: (1) neuroplastic reorganization of language function, (2) activation of the mirror neuron system and multimodal integration, (3) utilization of shared or specific features of music and language, and (4) motivation and mood. In this paper, we review available evidence for each mechanism and propose that these mechanisms are not mutually exclusive, but rather represent different levels of explanation, reflecting the neurobiological, cognitive, and emotional effects of MIT. Thus, instead of competing, each of these mechanisms may contribute to language rehabilitation, with a better understanding of their relative roles and interactions allowing the design of protocols that maximize the effectiveness of singing therapy for aphasia.
... This situation is compounded by the different cortical organisation of musical functions in musicians and nonmusicians, which is thought to reflect years of training and early exposure to a musical environment in instrumental musicians (Merrett and Wilson, 2012;Münte et al., 2002). Despite its ubiquitous nature, less is known about singing with only a few functional imaging studies directly comparing cortical organisation in expert and nonexpert singers (Formby et al., 1989;Kleber et al., 2010;Wilson et al., 2011, 2012;Zarate and Zatorre, 2008). These studies generally support a more focal (or task relevant) pattern of activation in experts, including more refined networks for audio-vocal integration (Zarate and Zatorre, 2008), more focal activation in ventral sensorimotor and subcortical motor areas important for kinaesthetic motor control (Kleber et al., 2010), and more specialised circuits in the left hemisphere when singing with lyrics (Wilson et al., 2011), all of which are presumed to underpin more accurate vocal performance. ...
... Despite its ubiquitous nature, less is known about singing with only a few functional imaging studies directly comparing cortical organisation in expert and nonexpert singers (Formby et al., 1989;Kleber et al., 2010;Wilson et al., 2011, 2012;Zarate and Zatorre, 2008). These studies generally support a more focal (or task relevant) pattern of activation in experts, including more refined networks for audio-vocal integration (Zarate and Zatorre, 2008), more focal activation in ventral sensorimotor and subcortical motor areas important for kinaesthetic motor control (Kleber et al., 2010), and more specialised circuits in the left hemisphere when singing with lyrics (Wilson et al., 2011), all of which are presumed to underpin more accurate vocal performance. ...
... Singing with lyrics (referred to as vocal singing) provides an ideal paradigm for investigating this issue as it shares features common to both (Schön et al., 2005). Neuroimaging research of vocal singing in nonmusicians has indicated considerable cortical overlap, suggesting that singing and language functions are proximally located (Brown et al., 2004, 2006;Callan et al., 2006;Özdemir et al., 2006;Wilson et al., 2011, 2012). This is supported by research showing that syntactic processing in language and music rely on activity in the left inferior frontal gyrus (Sammler et al., 2011), and that music training can benefit a range of language skills, such as phonological awareness, tonal language learning, and reading abilities (Degé and Schwarzer, 2011;Moreno et al., 2009;Marie et al., 2011). ...
... that have been reported in previous fMRI studies of phonation, including primary motor cortex (Ozdemir et al., 2006;Loucks et al., 2007;Brown et al., 2008, 2009;Peck et al., 2009;Bouchard et al., 2013;Belyk and Brown, 2017;Eichert et al., 2020;Belyk et al., 2021), SMA (Loucks et al., 2007;Brown et al., 2008, 2009;Parkinson et al., 2012;Grabski et al., 2013), cerebellum lobule VI (Brown et al., 2009;Belyk et al., 2021), and superior temporal gyrus (Ozdemir et al., 2006;Loucks et al., 2007;Brown et al., 2009;Peck et al., 2009;Parkinson et al., 2012;Belyk et al., 2021). However, the OHC group also exhibited activation in the caudate, inferior frontal gyrus (Ozdemir et al., 2006;Loucks et al., 2007;Peck et al., 2009;Parkinson et al., 2012;Belyk et al., 2021), right insula (Peck et al., 2009;Parkinson et al., 2012), and right premotor cortex (Loucks et al., 2007;Wilson et al., 2011;Belyk et al., 2021). ...
... While the precise role of right PMd in speech production is unclear, there is growing evidence that PMd plays a key role in phonatory control. Indeed, neuroimaging studies have reported activation of the right PMd during singing (Saito et al., 2006;Wilson et al., 2011;Belyk et al., 2021), humming (Hickok et al., 2003), and vowel production (Loucks et al., 2007), as well as during volitional exhalation (Loucks et al., 2007). Furthermore, our right PMd cluster appears to overlap with the 'dorsal precentral speech area' (dPCSA), which has been proposed by Hickok et al. (2023) to serve as a region for coordinating pitch-related vocalizations, such as song and prosody. ...
Article
Full-text available
Introduction: Hypophonia is a common feature of Parkinson’s disease (PD); however, the contribution of motor cortical activity to reduced phonatory scaling in PD is still not clear.
Methods: In this study, we employed a sustained vowel production task during functional magnetic resonance imaging to compare brain activity between individuals with PD and hypophonia and an older healthy control (OHC) group.
Results: When comparing vowel production versus rest, the PD group showed fewer regions with significant BOLD activity compared to OHCs. Within the motor cortices, both OHC and PD groups showed bilateral activation of the laryngeal/phonatory area (LPA) of the primary motor cortex as well as activation of the supplementary motor area. The OHC group also recruited additional activity in the bilateral trunk motor area and right dorsal premotor cortex (PMd). A voxel-wise comparison of PD and HC groups showed that activity in right PMd was significantly lower in the PD group compared to OHC (p < 0.001, uncorrected). Right PMd activity was positively correlated with maximum phonation time in the PD group and negatively correlated with perceptual severity ratings of loudness and pitch.
Discussion: Our findings suggest that hypoactivation of PMd may be associated with abnormal phonatory control in PD.
... Limited molecular evidence has identified genetic variants related to music aptitude in humans that are also linked to song learning in songbirds, suggesting shared evolutionary origins for human singing (Järvelä, 2018; Kanduri et al., 2015). The occurrence of untrained individuals with high singing pitch accuracy supports an innate predisposition (Larrouy-Maestri et al., 2013;Watts et al., 2003;Wilson et al., 2011). To date, Park et al. (2012) is the only study to explore the genetic basis of singing ability using an objective measure. ...
... Relatedly, Hambrick and Tucker-Drob (2015) found that the heritability of accomplishment was greater in twins who engaged in regular practice, which in turn has a heritable component (Butkovic et al., 2015;Hambrick and Tucker-Drob, 2015;Mosing et al., 2014a). In the case of pitch-accurate singing, our findings suggest that a genetic predisposition may be more likely expressed when an individual is exposed to an early enriched singing and musical environment, potentially leading to changes in the neural networks underpinning singing (Wilson et al., 2011) that may occur through interactions between genes and early environments. These gene-environment interactions take place when children with genetic predispositions are actively or passively exposed to environments that foster these predispositions. ...
Article
Full-text available
Singing ability is a complex human skill influenced by genetic and environmental factors, the relative contributions of which remain unknown. Currently, genetically informative studies using objective measures of singing ability across a range of tasks are limited. We administered a validated online singing tool to measure performance across three everyday singing tasks in Australian twins (n = 1189) to explore the relative genetic and environmental influences on singing ability. We derived a reproducible phenotypic index for singing ability across five performance measures of pitch and interval accuracy. Using this index we found moderate heritability of singing ability (h² = 40.7%) with a striking, similar contribution from shared environmental factors (c² = 37.1%). Childhood singing in the family home and being surrounded by music early in life both significantly predicted the phenotypic index. Taken together, these findings show that singing ability is equally influenced by genetic and shared environmental factors.
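The study estimated these components with structural equation (twin) modeling; as a rough illustration of the underlying logic only, the minimal Python sketch below applies Falconer's formulas. The twin-pair correlations are invented placeholders chosen to yield components of roughly the reported magnitude, not values from the paper.

# Illustration of an ACE-style variance decomposition via Falconer's formulas.
# r_mz and r_dz are hypothetical placeholder correlations, not study data.
r_mz = 0.78  # phenotypic correlation for monozygotic pairs (assumed)
r_dz = 0.58  # phenotypic correlation for dizygotic pairs (assumed)

h2 = 2 * (r_mz - r_dz)   # additive genetic variance (heritability)
c2 = 2 * r_dz - r_mz     # shared environmental variance
e2 = 1 - r_mz            # unique environment plus measurement error

print(f"h2 = {h2:.2f}, c2 = {c2:.2f}, e2 = {e2:.2f}")  # ~0.40, ~0.38, ~0.22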
... Zarate and Zatorre (2008); Zarate et al. (2010) showed that the network involved in vocal pitch production depended on experience and expertise, as well as the degree of voluntary control (as manipulated by the task in their study): while perception and production tasks generally activated the posterior superior temporal (pSTG) and STS in the temporal lobe, and the posterior inferior frontal gyri (pIFG) in the frontal lobe, instructions to voluntarily compensate for pitch shifts additionally elicited activity in the cingulate cortex and the pre-supplementary motor area, especially in trained singers (Zarate and Zatorre, 2008;Zarate et al., 2010). Wilson et al. (2010) showed a bilateral frontotemporal network, including the inferior and middle frontal gyri, during singing compared to speech production. Ozdemir et al. (2006) showed that vocal production of “intoned speech” (singing words) showed stronger activation of an auditory-motor network involving the inferior pre- and postcentral gyrus on both hemispheres as well as the superior temporal, and the most inferior portions of the pIFG on the right more than the left hemisphere in comparison to humming (singing a pitch without words; Ozdemir et al., 2006). ...
... The posterior IFG and posterior STG have previously been shown to play an important role in pitch production and vocal pitch regulation (Zarate and Zatorre, 2008;Peck et al., 2009;Wilson et al., 2010). In addition, these regions are structurally abnormal in gray matter concentration and cortical thickness in both right and left hemisphere regions among individuals who have problems matching a pitch or singing in tune with others, a disorder commonly referred to as tone-deafness (Hyde et al., 2006, 2007;Mandell et al., 2007). ...
Article
Full-text available
Perceiving and producing vocal sounds are important functions of the auditory-motor system and are fundamental to communication. Prior studies have identified a network of brain regions involved in pitch production, specifically pitch matching. Here we reverse engineer the function of the auditory perception-production network by targeting specific cortical regions (e.g., right and left posterior superior temporal (pSTG) and posterior inferior frontal gyri (pIFG)) with cathodal transcranial direct current stimulation (tDCS)—commonly found to decrease excitability in the underlying cortical region—allowing us to causally test the role of particular nodes in this network. Performance on a pitch-matching task was determined before and after 20 min of cathodal stimulation. Acoustic analyses of pitch productions showed impaired accuracy after cathodal stimulation to the left pIFG and the right pSTG in comparison to sham stimulation. Both regions share particular roles in the feedback and feedforward motor control of pitched vocal production with a differential hemispheric dominance.
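Pitch-matching accuracy in studies of this kind is commonly quantified as the deviation of the produced fundamental frequency from the target in cents (100 cents = 1 semitone). The short Python sketch below shows that calculation; the frequency values are hypothetical, and the study's actual acoustic analysis pipeline is not reproduced here.

import numpy as np

def pitch_error_cents(f_produced_hz, f_target_hz):
    # Absolute deviation of produced pitch from target pitch, in cents.
    return np.abs(1200.0 * np.log2(np.asarray(f_produced_hz) / np.asarray(f_target_hz)))

# Hypothetical pre- and post-stimulation productions against the same targets
targets = [261.6, 329.6, 392.0]
pre = pitch_error_cents([262.0, 331.5, 389.0], targets)
post = pitch_error_cents([265.0, 336.0, 401.0], targets)
print(pre.mean(), post.mean())  # a larger mean error would indicate impaired accuracy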
... The functional images were pre-processed and analysed using Statistical Parametric Mapping Software (SPM8 r6313; Wellcome Trust Centre for Neuroimaging, London, UK) with the aid of the iBrain™ analysis toolbox for SPM (Abbott et al., 2011). Due to T1 bleed-through between adjacent interleaved slices of each volume, the slices obtained in the second half of each TR (alternate slices) were excluded from the analysis, effectively creating a 3 mm gap between slices. ...
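For readers unfamiliar with this kind of slice exclusion, a minimal Python/nibabel sketch of the general idea follows. It assumes a hypothetical 4D EPI file and a simple even/odd slice split; it is not the authors' iBrain pipeline.

import nibabel as nib
import numpy as np

img = nib.load("func_run1.nii.gz")       # hypothetical interleaved-acquisition EPI
data = img.get_fdata()                   # shape (x, y, z, t)

# Retain every second slice (assuming even-indexed slices were acquired in the first half of the TR)
keep = np.arange(0, data.shape[2], 2)
trimmed = data[:, :, keep, :]

affine = img.affine.copy()
affine[:3, 2] *= 2                       # slice spacing doubles once alternate slices are dropped
nib.save(nib.Nifti1Image(trimmed, affine), "func_run1_alternate_slices.nii.gz")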
... ICA was able to uncover task-modulated activity during singing not detected in the conventional analyses, including deactivation of peri-lesional regions in the left hemisphere and activation of right frontal, superior medial frontal and bilateral auditory regions. This suggests that Case B may have been deactivating disordered left frontal regions and recruiting right hemisphere regions of the language and singing networks (Wilson, Abbott, Lusher, Gentle, & Jackson, 2011) in order to attempt the singing tasks. The fact that Case B showed significant task-related activity for singing attempts but not for speaking might reflect that he was able to "hum", but was not able to speak or sing any words. ...
Article
Background: Music interventions for aphasia, such as Melodic Intonation Therapy, have often been criticised for a lack of high-quality evidence regarding their efficacy and mechanisms of action. However, attempts to evaluate these interventions and produce an evidence base for or against their use have proven challenging. Aims: We discuss both the challenges in obtaining research evidence and some possible solutions, taking into perspective differences between clinical and research approaches in their aims, orientation, and methodology. Research is generally focused on standardisation, generalisability, and the provision of adequately powered and statistically sound evidence. In contrast, clinical work is usually client-centric, requiring flexibility to address the needs of the individual patient. To illustrate these points, we present case studies of two individuals with chronic post-stroke aphasia, who were pilot participants for a music intervention study. Methods and Procedures: These patients received research-oriented treatment with a standardised audio-visual Melodic Intonation Therapy protocol delivered via DVD over 6 weeks. They underwent comprehensive language assessments before and after therapy, which included functional neuroimaging for one individual. Outcomes and Results: This standardised approach provided modest clinical benefit in one case, although this was not captured in standardised outcome assessments. For the second, more severely aphasic participant, there was no observable benefit, possibly because the standardised approach did not provide the flexibility needed to deal with the severity of his deficit. Conclusions: Through presentation of these case examples, we highlight how heterogeneous clinical presentations and individual differences pose challenges to standardised research designs. We then offer suggestions for how these factors might be accommodated within rigorous research designs to provide a better evidence base for aphasia interventions.
... Neuroimaging studies demonstrated that the learning and production of song (song control system) depend on the action of diverse areas of the brain, acting in a specific neural network, to grant meaning to musicality, conceptualized as the ability to generate meaning through making expressive music 14 . Prior research on song production has been conducted with a focus on the alteration of its melodic structure, harmonization, amusia, and the persistence of song and musicality in patients with Broca's aphasia 1 . ...
... Amusia has been another focus of study, based on neuroimaging findings. Amusia is the partial or total difficulty in perceiving melodic sounds or rhythms, due to a dysfunction in the neural processing of music 14 . Terao et al. 9 reported a case of amusia in a professional tango singer following a cerebrovascular event. ...
Article
Full-text available
It is assumed that singing is a highly complex activity, which requires the activation and interconnection of sensorimotor areas. The aim of the current research was to present the evidence from neuroimaging studies in the performance of the motor and sensory system in the process of singing. Research articles on the characteristics of human singing analyzed by neuroimaging, which were published between 1990 and 2016, and indexed and listed in databases such as PubMed, BIREME, Lilacs, Web of Science, Scopus, and EBSCO were chosen for this systematic review. A total of 9 articles, employing magnetoencephalography, functional magnetic resonance imaging, positron emission tomography, and electrocorticography were chosen. These neuroimaging approaches enabled the identification of a neural network interconnecting the spoken and singing voice, to identify, modulate, and correct pitch. This network changed with the singer's training, variations in melodic structure and harmonized singing, amusia, and the relationship among the brain areas that are responsible for speech, singing, and the persistence of musicality. Since knowledge of the neural networks that control singing is still scarce, the use of neuroimaging methods to elucidate these pathways should be a focus of future research.
... Neuroscientists claim that playing music is the brain's equivalent of a full-body workout, that musicians use more parts of their brain simultaneously to complete tasks, and that the musician's brain is used as a model for the study of neuroplasticity, that is, the adaptive capability of the central nervous system. Empirical research findings confirmed that music making integrates multiple brain systems; it primes the brain for learning and causes development that benefits the whole person (Wilson, Abbott, Lusher, Gentle, & Jackson, 2011;Merrett & Wilson, 2011;Wilson, 2013). ...
... Having a culturally rich and linguistically diverse musical life, and having taught for more than 30 years as a music educator, I am in an advantageous position to advocate for music in education. Empirical research documents many benefits of music in education (Stevens & McPherson, 2004;Asmus, 2005;Willingham, 2009;Kallio, 2009;Letts, 2012a;Merrett & Wilson, 2011;Wilson et al., 2011;Wilson, 2013;Stevens & Stefanakis, 2014;Henriksson-Macaulay, 2014). Personally, music shaped my empathetic personality. ...
Article
Full-text available
Abstract: I am a Chinese-Australian musician-educator of over 30 years. In this autoethnography, I act as an agent of change by presenting my life as a social project. This assists understanding of a larger relational, communal and political world that moves us to critical engagement, social action and change. Evolutionary psychology asserts that language has evolved from the use of music. Empirical research maintains that children who start learning music early become better learners of languages. Music psychologists argue that music education is crucial in identity construction. Being a multi-lingual, multi-instrumental, and multi-occupational individual, I consider myself primarily a musician. My various identities in music as pianist-accompanist, singer, choral conductor, composer, and dance instructor informed and shaped my other identities as psychotherapist, interpreter-translator, author, teacher and academic researcher. The findings of this study suggest that formal/informal musical engagement fosters executive brain functions that determine my learning outcomes. My research contributes to the national debate about the benefits of music in education. It addresses a research gap identified about the effect of musical engagement on identity formation, and learning in other curriculum areas. The findings can assist music advocacy and provide insight to the preparation of future educators for multicultural Australia. Keywords: Autoethnography, Social Change, Confucianism, Identity Formation, Music and Music Education, Formal and Informal Learning.
... Songbirds have critical periods early in life where species-specific songs must be learnt, which serves analogous social and communicative functions to language in humans (Bolhuis et al., 2010;Doupe & Kuhl, 1998). Notably, the neurocognitive networks subserving singing and language in humans have been shown to be proximally located, with their overlap partly influenced by the perceptual or productive nature of the task and the individual's level of speech proficiency or singing expertise (Brown et al., 2006;Ozdemir et al., 2006;Pitkäniemi et al., 2023;Whitehead & Armony, 2018;Wilson et al., 2011). ...
Article
Full-text available
As with many other musical traits, the social environment is a key influence on the development of singing ability. While the familial singing environment is likely to be formative, its role relative to other environmental influences such as training is unclear. We used structural equation modeling to test relationships among demographic characteristics, familial environmental variables (early and current singing with family), vocal training, and singing ability in a large, previously documented sample of Australian twins (N = 1163). Notably, early singing with family, and to a lesser extent vocal training, predicted singing ability, whereas current singing with family did not. Early familial singing also mediated the relationship between sex and singing ability, with men who sang less with family during childhood showing poorer ability. Bivariate twin models between early familial singing and singing ability showed the phenotypic correlation was largely explained by shared environmental influences. This raises the possibility of a sensitive period for singing ability, with sociocultural expectations around singing potentially differentiating the developmental trajectories of this skill for men and women.
... In terms of the current state of Chinese vocal music, there is a lack of a supportive artistic atmosphere, and as the demands of the art market grow, traditional vocal teaching methods can no longer meet the need to develop singers’ abilities in diverse directions and urgently require reform and innovation [5][6][7]. Therefore, if Chinese vocal music is to develop in a diversified direction, teaching must be clear about the diverse characteristics of vocal art so as to realize its innovative development [8][9]. The integration of an audio-visual, multi-sensory vocal singing teaching mode strengthens the singing ability of vocal music majors and promotes reform and innovation in teaching. ...
Article
Full-text available
To cultivate students’ perception of music culture through audio-visual multi-sensory methods in college vocal singing teaching is a new way to understand music works and cultivate music culture consciousness. The flipped classroom is the basis for this paper, which aims to innovate the teaching of vocal singing in colleges and universities. Linear-prediction Mel cepstral coefficients are used to extract the auditory features of the music symbols of the vocal singing video in the flipped classroom so that students can master the relevant skills and pronunciation of vocal singing. To help students understand vocal singing in its specific context, the time-frequency diagram is utilized to analyze the singer’s movements in the vocal singing video. The self-attention mechanism is used to fuse auditory and visual features, which is validated and analyzed for its effectiveness, teaching effect, and satisfaction. The results show that when the added noise signal-to-noise ratio reaches 50 dB, the recognition rate of the LPMFCC algorithm for vocal singing music signals is above 80%. Utilizing the flipped classroom for vocal singing teaching, the average score of students’ vocal singing increased to 9.71, and more than 80% of the students believed that the flipped classroom teaching mode could enhance their independent learning ability. The flipped classroom teaching mode, which integrates audio-visual and multi-sensory teaching methods, can enrich the teaching methods of vocal singing in colleges and universities and further enhance students’ interest in vocal singing.
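As a rough illustration of the auditory side of such a pipeline, the Python sketch below extracts standard Mel-frequency cepstral features from a recording and adds noise at a chosen signal-to-noise ratio. The file name is hypothetical, and the paper's LPMFCC variant and self-attention audio-visual fusion are not reproduced.

import librosa
import numpy as np

y, sr = librosa.load("singing_clip.wav", sr=16000)    # hypothetical recording
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)    # (13, n_frames) auditory features

# Add white noise at a target SNR (e.g., 50 dB) to probe feature robustness
snr_db = 50.0
noise = np.random.randn(len(y))
noise *= np.sqrt(np.mean(y**2) / (10 ** (snr_db / 10) * np.mean(noise**2)))
mfcc_noisy = librosa.feature.mfcc(y=y + noise, sr=sr, n_mfcc=13)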
... In the literature, several correlations to classic voice range classification have been suggested, taking into consideration parameters such as body mass, height and laryngeal morphology [23] [52] [53]. All these correlations were performed using a variety of methods, such as video laryngeal endoscopy [54], stroboscopy [55], radiographic imaging, or even ultrasound imaging [25] [26] [56] [57] [58] [59] [60]. ...
... Some authors maintain that music and language are processed by a common neural network encompassing melodic, lexical, and phonological processing (Schön, Gordon, Campagne, Magne, Astésano, Anton, & Besson, 2010). When singing with lyrics, left-hemisphere neural networks are markedly activated (Wilson, 2011), which is where Broca's and Wernicke's areas, involved in language processing, are located. The right temporal lobe is involved in musical abilities (Hyde, Peretz, & Zatorre, 2008); it can therefore be assumed that music is processed in different areas and structures. ...
Article
Full-text available
The objectives of this research are to determine whether a therapeutic alliance emerges between patient and therapist during a given music therapy treatment, and to propose an appropriate methodology for analysing interventions of this kind. The goal of the therapeutic intervention is to reduce the frequency of obsessive behaviours and improve attention. The sessions were recorded, and the behaviours that make up the therapeutic alliance were coded in order to obtain and analyse data. The descriptive statistics suggest that it is possible to build a therapeutic relationship with the proposed music therapy treatment. In conclusion, the possibility of a therapeutic alliance emerging in music therapy is demonstrated, despite dispensing with verbal language, and a new line of research is opened.
... Using a dichotic listening paradigm, tone language listeners show a right ear advantage for lexical tones, but a left ear advantage for hummed versions of lexical tones, whereas nontone language listeners show a left ear advantage for both types of stimuli (Van Lancker, 1980). Furthermore, nonexperienced singers show greater activation of the language network in the brain during vocal singing, whereas experienced singers recruit different brain areas from the language network during vocal singing (Wilson, Abbott, Lusher, Gentle, & Jackson, 2011), thereby suggesting that the same tasks may elicit dissociations in activation of brain areas depending on musical training. In terms of behavior, nontone language nonmusicians have been shown to have graded discrimination performance on stimuli ranging from music-like to speech-like: that is, best performance on violin notes with the same pitch contour as lexical tones, followed by low-pass filtered lexical tones, and worst performance on naturalistic lexical tones/speech (Burnham et al., 2014). ...
Article
Full-text available
Purpose: Evidence suggests that extensive experience with lexical tones or musical training provides an advantage in perceiving nonnative lexical tones. This investigation concerns whether such an advantage is evident in learning nonnative lexical tones based on the distributional structure of the input.
Method: Using an established protocol, distributional learning of lexical tones was investigated with tone language (Mandarin) listeners with no musical training (Experiment 1) and nontone language (Australian English) listeners with musical training (Experiment 2). Within each experiment, participants were trained on a bimodal (2-peak) or a unimodal (single peak) distribution along a continuum spanning a Thai lexical tone minimal pair. Discrimination performance on the target minimal pair was assessed before and after training.
Results: Mandarin nonmusicians exhibited clear distributional learning (listeners in the bimodal, but not those in the unimodal condition, improved significantly as a function of training), whereas Australian English musicians did not (listeners in both the bimodal and unimodal conditions improved as a function of training).
Conclusions: Our findings suggest that veridical perception of lexical tones is not sufficient for distributional learning of nonnative lexical tones to occur. Rather, distributional learning appears to be modulated by domain-specific pitch experience and is constrained possibly by top-down interference.
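One common way to implement the bimodal versus unimodal training manipulation described above is to present tokens from a stimulus continuum with either two presentation-frequency peaks or a single central peak. The Python sketch below illustrates that idea with an 8-step continuum; the token counts are illustrative assumptions, not the study's materials.

import numpy as np

steps = np.arange(1, 9)                        # stimulus continuum steps 1..8
bimodal = np.array([1, 3, 4, 2, 2, 4, 3, 1])   # two presentation-frequency peaks
unimodal = np.array([1, 2, 3, 4, 4, 3, 2, 1])  # single central peak

def training_sequence(counts, seed=0):
    # Build and shuffle a training sequence with the given token counts per step.
    rng = np.random.default_rng(seed)
    seq = np.repeat(steps, counts)
    rng.shuffle(seq)
    return seq

print(training_sequence(bimodal)[:10])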
... These competitive, ambitious, determined and self-sacrificing “music moms” are the architects of their children's musical development (Wang, 2009). Music making integrates multiple brain systems; it primes the brain for learning and promotes development that benefits the whole person (Wilson, Abbott, Lusher, Gentle & Jackson, 2011;Merrett & Wilson, 2011). The extrinsic and instrumental benefits of music education include fostering language learning, reading, and improved memory (Henriksson-Macaulay, 2014). ...
Article
Full-text available
Abstract This phenomenological study focuses on the effect of parenting rooted in Confucian Heritage Culture (CHC) on music learning of four American-born Chinese siblings from the same family. It investigates parental differential treatment (PDT) and sibling interactions on the musical development of the participants raised by musician parents. Participants’ music practice habits and their learning and performing opportunities can further account for their music identity formation. Hermeneutic inquiries explore how individuals make sense of their experiences. Data consisted of six semi-structured interviews conducted among this family and their email correspondences, and was analysed using Interpretative Phenomenological Analysis. Four overarching themes emerged from the family narratives: parenting rooted in CHC-inspired music learning in this family; parental musicianship and continuous support, combined with kin role modelling and siblings’ sound practice habit positively facilitated their successful music learning; optimal learning and performing opportunities were highly beneficial in the siblings’ music learning process; and sibling relationships affected by PDT did not have long-term aversive effects on their learning or wellbeing. The findings show that PDT occurs among CHC families and children need to have a safe environment to thrive. Scholars and health practitioners should acquire intercultural knowledge to effectively work with families of diverse cultures. Keywords Confucian Heritage Culture, Interpretative Phenomenological Analysis, moral cultivation and self-perfection, music education aims, music learning, parental differential treatment, sibling interaction
... et al., 2012). They define the role of the slave owner as the same as the Pharaoh described in the lyrics, clearly showing that the slave owner was not in alignment with God's will (Caldwell, 2004;Gordon et al., 2011). They showed the strength and power in faith in the potential for change (Wilson et al., 2011;Sammler et al., 2012). As a result, the music and lyrics reduce depression and suicide (Koenig ...
Article
The goal of this study was to develop the foundation for the creation of a 21st century spiritual which could be used to mitigate the effects of stress and violence. Using a multi-disciplinary team and basing the work in the music of the antebellum Negro Spiritual (a group of 6000 works), reverse engineering, extensive use of engineering principles and utilization of existing databases was done to aid in the analysis of the neurological and physiological impact of the musical form and development of an applicable theory.
... Expressive and receptive musical functions have repeatedly been shown to recruit a distributed perisylvian network that at least partially overlaps with regions involved in elementary speech processing (e.g. Brown et al., 2006;Rogalsky et al., 2011;Sammler et al., 2009;Wilson et al., 2011). As a consequence, the question arises whether and to what extent musical training transfers to basic processing of speech. ...
... Another potential explanation of the results differentiating between processing of foreign-accented speech between first-and second-language speakers could be that there is recruitment of extra neural resources when undertaking tasks for which we are not trained. It has been shown, for example, that experienced singers, in which much of the processing is automated, show reduced activity relative to non-experienced singers (Wilson et al., 2011). It is unlikely that the results of our study can be explained by differences in task training and expertise, as the foreign-accented speech was difficult for both the English and Japanese groups, and the subjects had the same amount of training on the phonetic categorization task. ...
Article
Full-text available
Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ that were produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while functional magnetic resonance imaging measured their brain activity. For native English speakers, the Japanese accented speech was more difficult to categorize than the unaccented English speech. In contrast, Japanese speakers have difficulty distinguishing between /r/ and /l/, so both the Japanese accented and English unaccented speech were difficult to categorize. Brain regions involved with listening to foreign-accented productions of a first language included primarily the right cerebellum, left ventral inferior premotor cortex PMvi, and Broca's area. Brain regions most involved with listening to a second-language phonetic contrast (foreign-accented and unaccented productions) also included the left PMvi and the right cerebellum. Additionally, increased activity was observed in the right PMvi, the left and right ventral superior premotor cortex PMvs, and the left cerebellum. These results support a role for speech motor regions during the perception of foreign-accented native speech and for perception of difficult second-language phonetic contrasts.
... The target selected by iBrain™ is the image whose within-brain centre-of-mass is located closest to the median of all images in that time series. Target images from each session of a subject will then be non-linearly spatially normalised to a subject-specific space in an iterative fashion in a manner similar to that described in Wilson & Abbott et al. [47] to ensure unbiased registration of images across sessions; this step is designed to correct, as far as practicable, nonlinear image distortions that may differ from session to session. The step will be undertaken within subject rather than directly to the standard template to maximise the fidelity of within-subject registration. ...
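The target-selection rule described here (choosing the volume whose within-brain centre of mass lies closest to the median centre of mass of the series) can be sketched in a few lines of Python. The version below uses a crude intensity threshold as the within-brain mask and is only a conceptual sketch, not the actual iBrain implementation.

import numpy as np
from scipy.ndimage import center_of_mass

def select_target_volume(data4d, brain_thresh=0.2):
    # Return the index of the volume whose brain centre of mass is closest
    # to the median centre of mass across all volumes in the series.
    coms = []
    for t in range(data4d.shape[3]):
        vol = data4d[..., t]
        mask = vol > brain_thresh * vol.max()      # crude within-brain mask (assumption)
        coms.append(center_of_mass(vol * mask))
    coms = np.array(coms)
    median_com = np.median(coms, axis=0)
    return int(np.argmin(np.linalg.norm(coms - median_com, axis=1)))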
Article
Full-text available
Introduction: Children with congenital hemiplegia often present with limitations in using their impaired upper limb which impacts on independence in activities of daily living, societal participation and quality of life. Traditional therapy has adopted a bimanual training approach (BIM) and more recently, modified constraint induced movement therapy (mCIMT) has emerged as a promising unimanual approach. Evidence of enhanced neuroplasticity following mCIMT suggests that the sequential application of mCIMT followed by bimanual training may optimise outcomes (Hybrid CIMT). It remains unclear whether more intensely delivered group based interventions (hCIMT) are superior to distributed models of individualised therapy. This study aims to determine the optimal density of upper limb training for children with congenital hemiplegia.
Methods and analyses: A total of 50 children (25 in each group) with congenital hemiplegia will be recruited to participate in this randomized comparison trial. Children will be matched in pairs at baseline and randomly allocated to receive an intensive block group hybrid model of combined mCIMT followed by intensive bimanual training delivered in a day camp model (COMBiT; total dose 45 hours direct, 10 hours of indirect therapy), or a distributed model of standard occupational therapy and physiotherapy care (SC) over 12 weeks (total 45 hours direct and indirect therapy). Outcomes will be assessed at 13 weeks after commencement, and retention of effects tested at 26 weeks. The primary outcomes will be bimanual coordination and unimanual upper-limb capacity. Secondary outcomes will be participation and quality of life. Advanced brain imaging will assess neurovascular changes in response to treatment. Analysis will follow standard principles for RCTs, using two-group comparisons on all participants on an intention-to-treat basis. Comparisons will be between treatment groups using generalized linear models.
Trial registration: ACTRN12613000181707
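As a simple illustration of the planned analysis (a two-group comparison with a generalized linear model on an intention-to-treat basis), the Python sketch below fits a Gaussian GLM adjusting for a baseline score. The data frame, column names, and values are hypothetical and not the trial's analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["COMBiT", "SC"], 25),      # randomised arms (25 per group)
    "baseline": rng.normal(50, 10, 50),            # hypothetical baseline score
    "outcome_13wk": rng.normal(55, 10, 50),        # hypothetical primary outcome
})

# Gaussian GLM comparing groups while adjusting for baseline (ANCOVA-style)
model = smf.glm("outcome_13wk ~ group + baseline", data=df).fit()
print(model.summary())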
... Since both singing and speech involve vocalization and analysis of auditory feedback, it is reasonable to ask to what extent they rely on dedicated processes or rather share the same neuronal network (for a review, see Gordon et al., 2006). Brain areas underlying speaking and singing significantly overlap in non-musicians (e.g., Brown et al., 2006; Wilson et al., 2010). Nevertheless, singing appears to predominantly recruit right-hemisphere regions whereas speech production recruits primarily areas in the left hemisphere. ...
Article
Full-text available
Singing is as natural as speaking for the majority of people. Yet some individuals (i.e., 10–15%) are poor singers, typically performing or imitating pitches and melodies inaccurately. This condition, commonly referred to as “tone deafness,” has been observed both in the presence and absence of deficient pitch perception. In this article we review the existing literature concerning normal singing, poor-pitch singing, and, briefly, the sources of this condition. Considering that pitch plays a prominent role in the structure of both music and speech we also focus on the possibility that speech production (or imitation) is similarly impaired in poor-pitch singers. Preliminary evidence from our laboratory suggests that pitch imitation may be selectively inaccurate in the music domain without being affected in speech. This finding points to separability of mechanisms subserving pitch production in music and language.
Article
The purpose of this study was to ascertain whether seeing the lyrics while learning a difficult song aurally induces less cognitive load in learners compared to not seeing the lyrics, leading to better recall accuracy of the learned song. Cognitive load was assessed through a reaction time measure based on a dual-task paradigm. Recall accuracy of the learned song was measured regarding lyrics, pitches, and rhythm. Thirty-six non-music majors individually learned two songs through prerecorded aural instruction; for one song they saw the lyrics and for the other song they did not see the lyrics. The presentation order of instructional condition and song were counterbalanced. Results showed instructional condition affected cognitive load but not recall accuracy. A path analysis revealed a mediating effect of cognitive load regarding lyrics and rhythm, suggesting seeing the lyrics indirectly increases recall accuracy of lyrics and rhythm through its positive effect on cognitive load. Given limited instructional time, several strategies should be considered to prevent learners from experiencing cognitive overload while learning a difficult song aurally. Showing the lyrics of the difficult song could be one strategy for that purpose, at least for young adults with low levels of musical expertise.
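The mediation logic reported here (instructional condition affecting recall indirectly through cognitive load) can be illustrated with two regressions in the Baron and Kenny style. The Python sketch below uses simulated data with hypothetical variable names; it is not the study's path analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 36
lyrics_shown = np.tile([0, 1], n // 2)                        # condition indicator
cog_load_rt = 600 - 40 * lyrics_shown + rng.normal(0, 30, n)  # dual-task reaction time (ms)
recall = 70 - 0.05 * cog_load_rt + rng.normal(0, 5, n)        # recall accuracy (%)
df = pd.DataFrame({"lyrics_shown": lyrics_shown,
                   "cog_load_rt": cog_load_rt,
                   "recall": recall})

path_a = smf.ols("cog_load_rt ~ lyrics_shown", data=df).fit()           # condition -> load
path_b = smf.ols("recall ~ cog_load_rt + lyrics_shown", data=df).fit()  # load -> recall
indirect = path_a.params["lyrics_shown"] * path_b.params["cog_load_rt"]
print(f"indirect effect via cognitive load: {indirect:.2f}")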
Chapter
Singing provides older persons with opportunities to engage in music performance, whether or not they have done so in the past. It offers the esthetic pleasure of making music, and much more. Singing in a group has been associated with enhanced social bonding, general feeling of well-being, sense of personal growth, reductions in stress, pain, and loneliness, increased interest in life, and heightened immune function. The present chapter reviews research related to these findings focusing primarily on studies of choirs of healthy older persons. Attention is also directed to individual singing lessons for older adults. Whereas studies have suggested that experience playing a musical instrument is associated with cognitive resilience in senior years, few studies have considered the benefits of vocal training from this perspective. Neuroscience reveals that the same basic brain network underlies music making on a musical instrument and with the voice, and this further leads to the speculation that like playing a musical instrument, singing holds promise as a cost-effective way to help maintain and increase health and well-being in older adults.
Chapter
Full-text available
The social context and possible role of music in the history of mankind until today are discussed. It is emphasised that “amusia” (total lack of understanding of the meaning of music) is very infrequent in normal populations. It could be speculated that individuals who have lacked interest in music and have had poor ability to participate in musical activities have had less likelihood to survive than others. Music can have very strong effects in political and societal contexts. The concept multimodality is introduced—concomitant art experiences enforcing physiologically and psychologically the effects of one another. The possible longevity effect of choir singing is discussed in a societal context (effects not only on the singers themselves but also on cohesiveness in the community) using Swedish-speaking East Bothnians (who live much longer than their Finnish-speaking neighbours) as a scientifically studied example.
Chapter
Singing starts many physiological processes in the body. The most frequently studied physiological parameters during singing are heart rate and heart rate variability. Professional singers and amateurs differ particularly with regard to variations in heart rate. The former group has much larger heart rate variability during singing than amateurs. This may reflect differences in breathing technique. Numerous studies have shown increased vitality and relaxation scores after singing. Other studies have shown that singing influences the excretion of certain endocrine and immune factors. For instance, the blood concentration of oxytocin seems to increase during singing. The presence of an audience has a pronounced effect on the singer’s physiology, with an average increase of 20 heartbeats per minute when there is an audience.
Article
Humans are the most complex singers in nature, and the human voice is thought by many to be the most beautiful musical instrument. Aside from spoken language, singing represents a second mode of acoustic communication in humans. The purpose of this review article is to explore the functional anatomy of the "singing" brain. Methodologically, the existing literature regarding activation of the human brain during singing was carefully reviewed, with emphasis on the anatomic localization of such activation. Relevant human studies are mainly neuroimaging studies, namely functional magnetic resonance imaging and positron emission tomography studies. Singing necessitates activation of several cortical, subcortical, cerebellar, and brainstem areas, served and coordinated by multiple neural networks. Functionally vital cortical areas of the frontal, parietal, and temporal lobes bilaterally participate in the brain's activation process during singing, confirming the latter's role in human communication. Perisylvian cortical activity of the right hemisphere seems to be the most crucial component of this activation. This also explains why aphasic patients due to left hemispheric lesions are able to sing but not speak the same words. The term clef de sol activation is proposed for this crucial perisylvian cortical activation due to the clef de sol shape of the topographical distribution of these cortical areas around the sylvian fissure. Further research is needed to explore the connectivity and sequence of how the human brain activates to sing.
Article
Full-text available
The aim of this study was to examine the effects of three acoustic parameters on the difficulty of segregating a simple 4-note melody from a background of interleaved distractor notes. Melody segregation difficulty ratings were recorded while three acoustic parameters of the distractor notes were varied separately: intensity, temporal envelope, and spectral envelope. Statistical analyses revealed a significant effect of music training on difficulty rating judgments. For participants with music training, loudness was the most efficient perceptual cue, and no difference was found between the dimensions of timbre influenced by temporal and spectral envelope. For the group of listeners with less music training, both loudness and spectral envelope were the most efficient cues. We speculate that the difference between musicians and nonmusicians may be due to differences in processing the stimuli: musicians may process harmonic sound sequences using brain networks specialized for music, whereas nonmusicians may use speech networks.
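To make the stimulus manipulation concrete, the Python sketch below interleaves a 4-note target melody with distractor notes whose relative intensity can be varied, one of the three cues examined. Frequencies, durations, and levels are assumptions for illustration, not the study's stimuli.

import numpy as np

sr = 44100

def tone(freq_hz, dur_s=0.25, level=1.0):
    # Hann-windowed sine tone at the given frequency and relative level.
    t = np.linspace(0, dur_s, int(sr * dur_s), endpoint=False)
    return level * np.sin(2 * np.pi * freq_hz * t) * np.hanning(len(t))

melody = [392, 440, 494, 523]          # target 4-note melody (Hz, assumed)
distractors = [311, 370, 415, 466]     # interleaved distractor notes (Hz, assumed)
distractor_level = 0.5                 # manipulated cue: distractor intensity

sequence = np.concatenate(
    [np.concatenate([tone(m), tone(d, level=distractor_level)])
     for m, d in zip(melody, distractors)]
)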
Article
Full-text available
An automated coordinate-based system to retrieve brain labels from the 1988 Talairach Atlas, called the Talairach Daemon (TD), was previously introduced [Lancaster et al., 1997]. In the present study, the TD system and its 3-D database of labels for the 1988 Talairach atlas were tested for labeling of functional activation foci. TD system labels were compared with author-designated labels of activation coordinates from over 250 published functional brain-mapping studies and with manual atlas-derived labels from an expert group using a subset of these activation coordinates. Automated labeling by the TD system compared well with authors' labels, with a 70% or greater label match averaged over all locations. Author-label matching improved to greater than 90% within a search range of +/-5 mm for most sites. An adaptive grey matter (GM) range-search utility was evaluated using individual activations from the M1 mouth region (30 subjects, 52 sites). It provided an 87% label match to Brodmann area labels (BA 4 & BA 6) within a search range of +/-5 mm. Using the adaptive GM range search, the TD system's overall match with authors' labels (90%) was better than that of the expert group (80%). When used in concert with authors' deeper knowledge of an experiment, the TD system provides consistent and comprehensive labels for brain activation foci. Additional suggested applications of the TD system include interactive labeling, anatomical grouping of activation foci, lesion-deficit analysis, and neuroanatomy education. (C) 2000 Wiley-Liss, Inc.
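The coordinate-to-label lookup with a bounded search range can be illustrated with a toy nearest-neighbour search, as in the Python sketch below. The label table and coordinates are invented examples; this is not the Talairach Daemon's database or interface.

import numpy as np

labels = {                                    # hypothetical atlas entries
    (-42, 12, 28): "Left Inferior Frontal Gyrus (BA 44)",
    (52, -18, 6): "Right Superior Temporal Gyrus (BA 41)",
}

def lookup(coord_mm, search_mm=5.0):
    # Return the nearest label within search_mm of the query coordinate, else None.
    coord = np.asarray(coord_mm, dtype=float)
    best, best_d = None, np.inf
    for xyz, name in labels.items():
        d = np.linalg.norm(coord - np.asarray(xyz, dtype=float))
        if d <= search_mm and d < best_d:
            best, best_d = name, d
    return best

print(lookup((-40, 10, 27)))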
Article
Full-text available
Here, I examine to what extent music and speech share processing components by focusing on vocal production, that is, singing and speaking. In shaping my review, the modularity concept has played and continues to play a determining role. Thus, I will first provide a brief background on the contemporary notion of modularity. Next, I will present evidence that musical abilities depend, in part, on modular processes. The evidence comes mainly from neuropsychological dissociations. The relevance of findings of overlap in neuroimaging, and of interference and domain-transfer effects between music and speech, will also be addressed and discussed. Finally, I will contrast the modularity position with the resource-sharing framework proposed by Patel (2003, 2008a). This critical review should be viewed as an invitation to undertake future comparative research between music and language by focusing on the details of the functions that these mechanisms carry out, not only their specificity. Such comparative research is very important not only theoretically but also in practice because of its obvious clinical and educational implications.
Article
Full-text available
The possible links between music and language continue to intrigue scientists interested in the nature of these two types of knowledge, their evolution, and their instantiation in the brain. Here we consider music and language from a developmental perspective, focusing on the degree to which similar mechanisms of learning and memory might subserve the acquisition of knowledge in these two domains. In particular, it seems possible that while adult musical and linguistic processes are modularized to some extent as separate entities, there may be similar developmental underpinnings in both domains, suggesting that modularity is emergent rather than present at the beginning of life. Directions for future research are considered.
Article
Full-text available
Neuropsychological studies have suggested that imagery processes may be mediated by neuronal mechanisms similar to those used in perception. To test this hypothesis, and to explore the neural basis for song imagery, 12 normal subjects were scanned using the water bolus method to measure cerebral blood flow (CBF) during the performance of three tasks. In the control condition subjects saw pairs of words on each trial and judged which word was longer. In the perceptual condition subjects also viewed pairs of words, this time drawn from a familiar song; simultaneously they heard the corresponding song, and their task was to judge the change in pitch of the two cued words within the song. In the imagery condition, subjects performed precisely the same judgment as in the perceptual condition, but with no auditory input. Thus, to perform the imagery task correctly an internal auditory representation must be accessed. Paired-image subtraction of the resulting pattern of CBF, together with matched MRI for anatomical localization, revealed that both perceptual and imagery tasks produced similar patterns of CBF changes, as compared to the control condition, in keeping with the hypothesis. More specifically, both perceiving and imagining songs are associated with bilateral neuronal activity in the secondary auditory cortices, suggesting that processes within these regions underlie the phenomenological impression of imagined sounds. Other CBF foci elicited in both tasks include areas in the left and right frontal lobes and in the left parietal lobe, as well as the supplementary motor area. This latter region implicates covert vocalization as one component of musical imagery. Direct comparison of imagery and perceptual tasks revealed CBF increases in the inferior frontal polar cortex and right thalamus. We speculate that this network of regions may be specifically associated with retrieval and/or generation of auditory information from memory.
Article
Full-text available
Singing abilities are rarely examined despite the fact that their study represents one of the richest sources of information regarding how music is processed in the brain. In particular, the analysis of singing performance in brain-damaged patients provides key information regarding the autonomy of music processing relative to language processing. Here, we review the relevant literature, mostly on the perception and memory of text and tunes in songs, and we illustrate how lyrics can be distinguished from melody in singing, in the case of brain damage. We report a new case, G.D., who has a severe speech disorder, marked by phonemic errors and stuttering, without a concomitant musical production disorder. G.D. was found to produce as few intelligible words in speaking as in singing familiar songs. Singing "la, la, la" was intact and hence could not account for the speech deficit observed in singing. The results indicate that verbal production, be it sung or spoken, is mediated by the same (impaired) language output system and that this speech route is distinct from the (spared) melodic route. In sum, we provide here further evidence that the autonomy of music and language processing extends to production tasks.
Article
Full-text available
Some scholars consider music to exemplify the classic criteria for a complex human adaptation, including universality, orderly development, and special-purpose cortical processes. The present account focuses on processing predispositions for music. The early appearance of receptive musical skills, well before they have obvious utility, is consistent with their proposed status as predispositions. Infants' processing of musical or music-like patterns is much like that of adults. In the early months of life, infants engage in relational processing of pitch and temporal patterns. They recognize a melody when its pitch level is shifted upward or downward, provided the relations between tones are preserved. They also recognize a tone sequence when the tempo is altered so long as the relative durations remain unchanged. Melodic contour seems to be the most salient feature of melodies for infant listeners. However, infants can detect interval changes when the component tones are related by small-integer frequency ratios. They also show enhanced processing for scales with unequal steps and for metric rhythms. Mothers sing regularly to infants, doing so in a distinctive manner marked by high pitch, slow tempo, and emotional expressiveness. The pitch and tempo of mothers' songs are unusually stable over extended periods. Infant listeners prefer the maternal singing style to the usual style of singing, and they are more attentive to maternal singing than to maternal speech. Maternal singing also has a moderating effect on infant arousal. The implications of these findings for the origins of music are discussed.
Article
Full-text available
It has been reported that patients with severely nonfluent aphasia are better at singing lyrics than speaking the same words. This observation inspired the development of Melodic Intonation Therapy (MIT), a treatment whose effects have been shown, but whose efficacy is unproven and whose neural correlates remain unidentified. Because of its potential to engage/unmask language-capable regions in the unaffected right hemisphere, MIT is particularly well suited for patients with large left-hemisphere lesions. Using two patients with similar impairments and stroke size/location, we show the effects of MIT and a control intervention. Both interventions' post-treatment outcomes revealed significant improvement in propositional speech that generalized to unpracticed words and phrases; however, the MIT-treated patient's gains surpassed those of the control-treated patient. Treatment-associated imaging changes indicate that MIT's unique engagement of the right hemisphere, both through singing and tapping with the left hand to prime the sensorimotor and premotor cortices for articulation, accounts for its effect over nonintoned speech therapy.
Article
Full-text available
This study examined the efficacy of Melodic Intonation Therapy (MIT) in a male singer (KL) with severe Broca’s aphasia. Thirty novel phrases were allocated to one of three experimental conditions: unrehearsed, rehearsed verbal production (repetition), and rehearsed verbal production with melody (MIT). The results showed superior production of MIT phrases during therapy. Comparison of performance at baseline, 1 week, and 5 weeks after therapy revealed an initial beneficial effect of both types of rehearsal; however, MIT was more durable, facilitating longer-term phrase production. Our findings suggest that MIT facilitated KL’s speech praxis, and that combining melody and speech through rehearsal promoted separate storage and/or access to the phrase representation.
Article
Full-text available
Several studies have shown that motor-skill training over extended time periods results in reorganization of neural networks and changes in brain morphology. Yet, little is known about training-induced adaptive changes in the vocal system, which is largely subserved by intrinsic reflex mechanisms. We investigated highly accomplished opera singers, conservatory level vocal students, and laymen during overt singing of an Italian aria in a neuroimaging experiment. We provide the first evidence that the training of vocal skills is accompanied by increased functional activation of bilateral primary somatosensory cortex representing articulators and larynx. Opera singers showed additional activation in right primary sensorimotor cortex. Further training-related activation comprised the inferior parietal lobe and bilateral dorsolateral prefrontal cortex. At the subcortical level, expert singers showed increased activation in the basal ganglia, the thalamus, and the cerebellum. A regression analysis of functional activation with accumulated singing practice confirmed that vocal skills training correlates with increased activity of a cortical network for enhanced kinesthetic motor control and sensorimotor guidance together with increased involvement of implicit motor memory areas at the subcortical and cerebellar level. Our findings may have ramifications for both voice rehabilitation and deliberate practice of other implicit motor skills that require interoception.
Article
Full-text available
The cause of stuttering is unknown. Failure to develop left-hemispheric dominance for speech is a long-standing theory although others implicated the motor system more broadly, often postulating hyperactivity of the right (language nondominant) cerebral hemisphere. As knowledge of motor circuitry has advanced, theories of stuttering have become more anatomically specific, postulating hyperactivity of premotor cortex, either directly or through connectivity with the thalamus and basal ganglia. Alternative theories target the auditory and speech production systems. By contrasting stuttering with fluent speech using positron emission tomography combined with chorus reading to induce fluency, we found support for each of these hypotheses. Stuttering induced widespread overactivations of the motor system in both cerebrum and cerebellum, with right cerebral dominance. Stuttered reading lacked left-lateralized activations of the auditory system, which are thought to support the self-monitoring of speech, and selectively deactivated a frontal-temporal system implicated in speech production. Induced fluency decreased or eliminated the overactivity in most motor areas, and largely reversed the auditory-system underactivations and the deactivation of the speech production system. Thus stuttering is a disorder affecting the multiple neural systems used for speaking.
Article
Full-text available
To evaluate lateralization of speech production at the level of the Rolandic cortex, functional magnetic resonance imaging (1.5 Tesla, 27 parallel axial slices, EPI-technique) was performed during a speech task (continuous silent recitation of the names of the months of the year). As control conditions, non-speech tongue movements and silent singing of a well-known melody with the syllable 'la' as its carrier were considered. Tongue movements produced symmetrical activation at the lower primary motor cortex. During automatic speech a strong functional lateralization to the left hemisphere emerged within the same area. In contrast, singing yielded a predominant right-sided activation of the Rolandic region. Functional lateralization of speech production therefore seems to include the precentral gyrus as well as Broca's area.
Article
Full-text available
In this study, we investigated blood-flow-related magnetic-resonance (MR) signal changes and the time course underlying short-term motor learning of the dominant right hand in ten piano players (PPs) and 23 non-musicians (NMs), using a complex finger-tapping task. The activation patterns were analyzed for selected regions of interest (ROIs) within the two examined groups and were related to the subjects' performance. A functional learning profile, based on the regional blood-oxygenation-level-dependent (BOLD) signal changes, was assessed in both groups. All subjects achieved significant increases in tapping frequency during the training session of 35 min in the scanner. PPs, however, performed significantly better than NMs and showed increasing activation in the contralateral primary motor cortex throughout motor learning in the scanner. At the same time, involvement of secondary motor areas, such as bilateral supplementary motor area, premotor, and cerebellar areas, diminished relative to the NMs throughout the training session. Extended activation of primary and secondary motor areas in the initial training stage (7-14 min) and rapid attenuation were the main functional patterns underlying short-term learning in the NM group; attenuation was particularly marked in the primary motor cortices as compared with the PPs. When tapping of the rehearsed sequence was performed with the left hand, transfer effects of motor learning were evident in both groups. Involvement of all relevant motor components was smaller than after initial training with the right hand. Ipsilateral premotor and primary motor contributions, however, showed slight increases of activation, indicating that dominant cortices influence complex sequence learning of the non-dominant hand. In summary, the involvement of primary and secondary motor cortices in motor learning is dependent on experience. Interhemispheric transfer effects are present.
Article
Full-text available
The present study used positron emission tomography (PET) to examine the cerebral activity pattern associated with auditory imagery for familiar tunes. Subjects either imagined the continuation of nonverbal tunes cued by their first few notes, listened to a short sequence of notes as a control task, or listened and then reimagined that short sequence. Subtraction of the activation in the control task from that in the real-tune imagery task revealed primarily right-sided activation in frontal and superior temporal regions, plus supplementary motor area (SMA). Isolating retrieval of the real tunes by subtracting activation in the reimagine task from that in the real-tune imagery task revealed activation primarily in right frontal areas and right superior temporal gyrus. Subtraction of activation in the control condition from that in the reimagine condition, intended to capture imagery of unfamiliar sequences, revealed activation in SMA, plus some left frontal regions. We conclude that areas of right auditory association cortex, together with right and left frontal cortices, are implicated in imagery for familiar tunes, in accord with previous behavioral, lesion and PET data. Retrieval from musical semantic memory is mediated by structures in the right frontal lobe, in contrast to results from previous studies implicating left frontal areas for all semantic retrieval. The SMA seems to be involved specifically in image generation, implicating a motor code in this process.
Article
We present a unified statistical theory for assessing the significance of apparent signal observed in noisy difference images. The results are usable in a wide range of applications, including fMRI, but are discussed with particular reference to PET images which represent changes in cerebral blood flow elicited by a specific cognitive or sensorimotor task. Our main result is an estimate of the P-value for local maxima of Gaussian, t, χ2 and F fields over search regions of any shape or size in any number of dimensions. This unifies the P-values for large search areas in 2-D (Friston et al. [1991]: J Cereb Blood Flow Metab 11:690–699) large search regions in 3-D (Worsley et al. [1992]: J Cereb Blood Flow Metab 12:900–918) and the usual uncorrected P-value at a single pixel or voxel.
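As a worked illustration of the unified result described above, one standard form of the corrected P-value for the global maximum of a smooth Gaussian field approximates it by a resel-weighted sum of Euler characteristic densities; the densities below are the commonly quoted Gaussian-field forms and are given as a sketch, not a restatement of the paper's exact expressions.

```latex
% Approximate corrected P-value for the maximum of a smooth Gaussian
% random field Z over a search region S (illustrative sketch only).
P\!\left(\max_{s \in S} Z(s) \ge t\right) \;\approx\; \sum_{d=0}^{3} R_d(S)\,\rho_d(t),
\qquad
\begin{aligned}
\rho_0(t) &= 1 - \Phi(t), &
\rho_1(t) &= \frac{(4\ln 2)^{1/2}}{2\pi}\, e^{-t^2/2},\\
\rho_2(t) &= \frac{4\ln 2}{(2\pi)^{3/2}}\, t\, e^{-t^2/2}, &
\rho_3(t) &= \frac{(4\ln 2)^{3/2}}{(2\pi)^{2}}\,\bigl(t^2 - 1\bigr)\, e^{-t^2/2}.
\end{aligned}
```

Here R_d(S) denotes the d-dimensional resel count of the search region and Φ the standard normal distribution function; analogous densities exist for t, χ2 and F fields.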
Article
We used H215O PET to characterize the interaction of words and melody by comparing brain activity measured while subjects spoke or sang the words to a familiar song. Relative increases in activity during speaking vs singing were observed in the left hemisphere, in classical perisylvian language areas including the posterior superior temporal gyrus, supramarginal gyrus, and frontal operculum, as well as in Rolandic cortices and putamen. Relative increases in activity during singing were observed in the right hemisphere: these were maximal in the right anterior superior temporal gyrus and contiguous portions of the insula; relative increases associated with singing were also detected in the right anterior middle temporal gyrus and superior temporal sulcus, medial and dorsolateral prefrontal cortices, mesial temporal cortices and cerebellum, as well as in Rolandic cortices and nucleus accumbens. These results indicate that the production of words in song is associated with activation of regions within right hemisphere areas that are not mirror-image homologues of left hemisphere perisylvian language areas, and suggest that multiple neural networks may be involved in different aspects of singing. Right hemisphere mechanisms may support the fluency-evoking effects of singing in neurological disorders such as stuttering or aphasia.
Article
In this study, a computer-based musical composition task was used to access higher order musical representations of adult nonmusicians, and patients having undergone left- or right-sided anterior temporal lobectomy (ATL). Compositions were rated according to the main features of cognitive structuralist models of pitch, tonality, rhythm, and phrase structure (musical grouping), as well as in terms of their nontonal complexity. The results showed that all participants created compositions that contained features typical of Western tonal music. The a priori music-theoretical structure of the cognitive models, however, was not reflected in the data. Laterality effects were observed for the left and right ATL patients in comparison to the adult nonmusicians. Left ATL patients showed less use of tonal features in their compositions, whilst right ATL patients placed less emphasis on phrase structure and melodic contour. The right ATL patients also showed impaired melodic discrimination in comparison to the normal controls.
Article
Musically experienced and inexperienced men and women discriminated among fundamental-frequency contours presented either binaurally (i.e., same contour to both ears) or dichotically (i.e., different contours to each ear). On two separate occasions, males made significantly fewer errors than did females in the binaural condition, but not in the dichotic condition. Subjects with prior musical experience were superior to musically naive subjects in both conditions. The dichotic pitch task produced a left-ear advantage, which was unrelated to gender or musical experience. The results suggest that the male advantage on the binaural task reflects a sex difference in the coordination of the two hemispheres during conjoint processing of the same stimuli rather than a difference in the direction or degree of hemispheric specialization for these stimuli.
Article
MNI coordinates determined using SPM2 and FSL/FLIRT with the ICBM-152 template were compared to Talairach coordinates determined using a landmark-based Talairach registration method (TAL). Analysis revealed a clear-cut bias in reference frames (origin, orientation) and scaling (brain size). Accordingly, ICBM-152 fitted brains were consistently larger, oriented more nose down, and translated slightly down relative to TAL fitted brains. Whole brain analysis of MNI/Talairach coordinate disparity revealed an ellipsoidal pattern with disparity ranging from zero at a point deep within the left hemisphere to greater than 1-cm for some anterior brain areas. MNI/Talairach coordinate disparity was generally less for brains fitted using FSL. The mni2tal transform generally reduced MNI/Talairach coordinate disparity for inferior brain areas but increased disparity for anterior, posterior, and superior areas. Coordinate disparity patterns differed for brain templates (MNI-305, ICBM-152) using the same fitting method (FSL/FLIRT) and for different fitting methods (SPM2, FSL/FLIRT) using the same template (ICBM-152). An MNI-to-Talairach (MTT) transform to correct for bias between MNI and Talairach coordinates was formulated using a best-fit analysis in one hundred high-resolution 3-D MR brain images. MTT transforms optimized for SPM2 and FSL were shown to reduce group mean MNI/Talairach coordinate disparity from 5-13 mm to 1-2 mm for both deep and superficial brain sites. MTT transforms provide a validated means to convert MNI coordinates to Talairach compatible coordinates for studies using either SPM2 or FSL/FLIRT with the ICBM-152 template.
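A minimal sketch of how such a coordinate correction is applied in practice follows, assuming a 4x4 affine matrix; the matrix values below are placeholders for illustration only, not the published MTT coefficients, which would be substituted in real use.

```python
# Applying an affine correction to MNI coordinates (sketch only).
# The matrix below is a PLACEHOLDER, not the published MTT transform;
# in practice the SPM2- or FSL-optimized MTT matrix would be used.
import numpy as np

MTT_PLACEHOLDER = np.array([
    [0.99,  0.00,  0.00,  0.0],
    [0.00,  0.97,  0.05, -1.0],
    [0.00, -0.05,  0.97, -1.5],
    [0.00,  0.00,  0.00,  1.0],
])

def mni_to_tal(xyz, affine=MTT_PLACEHOLDER):
    """Map an MNI coordinate (x, y, z) through a 4x4 affine transform."""
    x, y, z = xyz
    out = affine @ np.array([x, y, z, 1.0])
    return tuple(out[:3])

print(mni_to_tal((-44, -12, 36)))
```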
Article
This review brings together evidence from a diverse field of methods for investigating sex differences in language processing. Differences are found in certain language-related deficits, such as stuttering, dyslexia, autism and schizophrenia. Common to these is that language problems may follow from, rather than cause the deficit. Large studies have been conducted on sex differences in verbal abilities within the normal population, and a careful reading of the results suggests that differences in language proficiency do not exist. Early differences in language acquisition show a slight advantage for girls, but this gradually disappears. A difference in language lateralization of brain structure and function in adults has also been suggested, perhaps following size differences in the corpus callosum. Neither of these claims is substantiated by evidence. In addition, overall results from studies on regional grey matter distribution using voxel-based morphometry, indicate no consistent differences between males and females in language-related cortical regions. Language function in Wada tests, aphasia, and in normal ageing also fails to show sex differentiation.
Article
In this note, we revisit earlier work on false discovery rate (FDR) and evaluate it in relation to topological inference in statistical parametric mapping. We note that controlling the false discovery rate of voxels is not equivalent to controlling the false discovery rate of activations. This is a problem that is unique to inference on images, in which the underlying signal is continuous (i.e., signal which does not have a compact support). In brief, inference based on conventional voxel-wise FDR procedures is not appropriate for inferences on the topological features of a statistical parametric map (SPM), such as peaks or regions of activation. We describe the nature of the problem, illustrate it with some examples and suggest a simple solution based on controlling the false discovery rate of connected excursion sets within an SPM, characterised by their volume.
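As a rough sketch of the proposed solution, assuming cluster-level P-values for the connected excursion sets have already been obtained (for example from random field theory), the false discovery rate can be controlled over clusters rather than voxels with a standard Benjamini-Hochberg step; the input values below are invented for illustration.

```python
# Benjamini-Hochberg FDR control applied to connected excursion sets
# (clusters) of an SPM rather than to individual voxels (sketch only).
def fdr_clusters(cluster_pvals, q=0.05):
    """Return indices of clusters surviving FDR control at level q."""
    m = len(cluster_pvals)
    order = sorted(range(m), key=lambda i: cluster_pvals[i])
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        # keep the largest rank k with p(k) <= k * q / m
        if cluster_pvals[idx] <= rank * q / m:
            threshold_rank = rank
    return sorted(order[:threshold_rank])

# Illustrative cluster-level P-values (e.g., from RFT cluster inference).
pvals = [0.001, 0.004, 0.030, 0.200, 0.450]
print(fdr_clusters(pvals, q=0.05))  # -> [0, 1, 2]
```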
Article
A procedure for forming hierarchical groups of mutually exclusive subsets, each of which has members that are maximally similar with respect to specified characteristics, is suggested for use in large-scale (n > 100) studies when a precise optimal solution for a specified number of groups is not practical. Given n sets, this procedure permits their reduction to n − 1 mutually exclusive sets by considering the union of all possible n(n − 1)/2 pairs and selecting a union having a maximal value for the functional relation, or objective function, that reflects the criterion chosen by the investigator. By repeating this process until only one group remains, the complete hierarchical structure and a quantitative estimate of the loss associated with each stage in the grouping can be obtained. A general flowchart helpful in computer programming and a numerical example are included.
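A brief sketch of this agglomerative procedure, using SciPy's implementation of Ward's minimum-variance objective; the observations are made up for illustration, and the full merge hierarchy is then cut into a chosen number of groups.

```python
# Agglomerative grouping with Ward's minimum-variance criterion (sketch).
# Each merge joins the pair of groups whose union least increases the
# within-group sum of squares, as in the procedure described above.
import numpy as np
from scipy.cluster.hierarchy import ward, fcluster

rng = np.random.default_rng(0)
# Illustrative observations: two loose clouds of points in 2-D.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

Z = ward(X)                                      # full hierarchical merge tree
groups = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 groups
print(groups[:5], groups[-5:])
```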
Article
Previous research suggests that left hemisphere specialisation for processing speech may specifically depend on rate-specific parameters, with rapidly successive or faster changing acoustic stimuli (e.g. stop consonant-vowel syllables) processed preferentially by the left hemisphere. The current study further investigates the involvement of the left hemisphere in processing rapidly changing auditory information, and examines the effects of sex on the organisation of this function. Twenty subjects participated in an auditory discrimination task involving the target identification of a two-tone sequence presented to one ear, paired with white noise to the contralateral ear. Analyses demonstrated a right ear advantage for males only at the shorter interstimulus interval durations (mean = 20 msec) whereas no ear advantage was observed for women. These results suggest that the male brain is more lateralised for the processing of rapidly presented auditory tones, specifically at shorter stimulus durations.
Article
Language lateralization based on functional magnetic resonance imaging (fMRI) is often used in clinical neurological settings. Currently, interpretation of the distribution, pattern and extent of language activation can be heavily dependent on the chosen statistical threshold. The aim of the present study was to 1) test the robustness of adaptive thresholding of fMRI data to yield a fixed number of active voxels, and to 2) develop a largely threshold-independent method of assessing when individual patients have statistically atypical language lateralization. Simulated data and real fMRI data in 34 healthy controls and 4 selected epilepsy patients performing a verbal fluency language fMRI task were used. Dependence of laterality on the thresholding method is demonstrated for simulated and real data. Simulated data were used to test the hypothesis that thresholding based upon a fixed number of active voxels would yield a laterality index that was more stable across a range of signal strengths (study power) compared to thresholding at a fixed p value. This stability allowed development of a method comparing an individual to a group of controls across a wide range of thresholds, providing a robust indication of atypical lateralization that is more objective than conventional methods. Thirty healthy controls were used as normative data for the threshold-independent method, and the remaining subjects were used as illustrative examples. The method could also be used more generally to assess relative regional distribution of activity in other neuroimaging paradigms (for example, one could apply it to the assessment of lateralization of activation in a memory task, or to the assessment of anterior-posterior distribution rather than laterality).
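A minimal sketch of a laterality index computed from a fixed number of top-ranking voxels rather than a fixed statistical threshold, in the spirit of the adaptive approach described above; the statistical map and hemisphere split below are invented for illustration, and the index simply counts voxels.

```python
# Laterality index LI = (L - R) / (L + R) computed from the N most
# active voxels, so the number of suprathreshold voxels is fixed rather
# than the statistical threshold (sketch with invented data).
import numpy as np

rng = np.random.default_rng(1)
t_values = rng.normal(0, 1, size=(40, 48, 40))   # toy statistical map
left_mask = np.zeros_like(t_values, dtype=bool)
left_mask[: t_values.shape[0] // 2] = True       # crude left/right split

def laterality_index(stat_map, left_mask, n_voxels=1000):
    """LI over the n_voxels highest-valued voxels of the map."""
    flat = stat_map.ravel()
    top = np.argsort(flat)[-n_voxels:]            # indices of top-N voxels
    in_left = left_mask.ravel()[top]
    L, R = in_left.sum(), (~in_left).sum()
    return (L - R) / (L + R)

print(laterality_index(t_values, left_mask))
```

Repeating the calculation over a range of n_voxels values gives the threshold-independent profile against which an individual patient can be compared with a control group.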
Article
This review first summarizes three functional magnetic resonance imaging studies conducted to elucidate the neural basis for interactions between the auditory and motor systems in the context of musical rhythm perception and production. The second part of the paper discusses these findings in the context of a proposed model for auditory-motor interactions that engage the posterior aspects of the superior temporal gyrus, and the ventral and dorsal premotor cortex. In the last section, we present outstanding issues that encompass topics, such as the role of auditory versus parietal cortex in sensorimotor integration, sensorimotor integration as an emergent property, the role of mirror neurons, and clinical applications.
Article
Why does music pervade our lives and those of all known human beings living today and in the recent past? Why do we feel compelled to engage in musical activity, or at least simply enjoy listening to music even if we choose not to actively participate? I argue that this is because musicality--communication using variations in pitch, rhythm, dynamics and timbre, by a combination of the voice, body (as in dance), and material culture--was essential to the lives of our pre-linguistic hominin ancestors. As a consequence we have inherited a desire to engage with music, even if this has no adaptive benefit for us today as a species whose communication system is dominated by spoken language. In this article I provide a summary of the arguments to support this view.
Article
In recent years, the demonstration that structural changes can occur in the human brain beyond those associated with development, ageing and neuropathology has revealed a new approach to studying the neural basis of behaviour. In this review paper, we focus on structural imaging studies of language that have utilised behavioural measures in order to investigate the neural correlates of language skills in the undamaged brain. We report studies that have used two different techniques: voxel-based morphometry of whole brain grey or white matter images and diffusion tensor imaging. At present, there are relatively few structural imaging studies of language. We group them into those that investigated (1) the perception of novel speech sounds, (2) the links between speech sounds and their meaning, (3) speech production, and (4) reading. We highlight the validity of the findings by comparing the results to those from functional imaging studies. Finally, we conclude by summarising the novel contribution of these studies to date and potential directions for future research.
Article
Traditionally, professional expertise has been judged by length of experience, reputation, and perceived mastery of knowledge and skill. Unfortunately, recent research demonstrates only a weak relationship between these indicators of expertise and actual, observed performance. In fact, observed performance does not necessarily correlate with greater professional experience. Expert performance can, however, be traced to active engagement in deliberate practice (DP), where training (often designed and arranged by their teachers and coaches) is focused on improving particular tasks. DP also involves the provision of immediate feedback, time for problem-solving and evaluation, and opportunities for repeated performance to refine behavior. In this article, we draw upon the principles of DP established in other domains, such as chess, music, typing, and sports to provide insight into developing expert performance in medicine.
Article
Twenty-four right-handed, right hemiparetic patients with Broca's aphasia were examined for their singing capacity. Twenty-one (87.5%) produced good melody. Twelve of these (57%) produced good text words while singing. It is speculated that the right hemisphere is dominant over the left for singing capacity. The relationship between melodic and text singing was also discussed.
Article
Paramagnetic deoxyhemoglobin in venous blood is a naturally occurring contrast agent for magnetic resonance imaging (MRI). By accentuating the effects of this agent through the use of gradient-echo techniques in high fields, we demonstrate in vivo images of brain microvasculature with image contrast reflecting the blood oxygen level. This blood oxygenation level-dependent (BOLD) contrast follows blood oxygen changes induced by anesthetics, by insulin-induced hypoglycemia, and by inhaled gas mixtures that alter metabolic demand or blood flow. The results suggest that BOLD contrast can be used to provide in vivo real-time maps of blood oxygenation in the brain under normal physiological conditions. BOLD contrast adds an additional feature to magnetic resonance imaging and complements other techniques that are attempting to provide positron emission tomography-like measurements related to regional neural activity.
Article
Two groups of singers (n = 12,13) and a group of nonsingers (n = 12) each produced the national anthem by (1) speaking and (2) singing the words and by (3) humming the melody. Regional cerebral blood flow (rCBF) was measured at rest and during each phonation task from seven areas in each hemisphere by the 133Xe-inhalation method. Intrahemisphere, interhemisphere, and global rCBF were generally similar across phonation tasks and did not yield appreciable differences among the nonsingers and the singers.
Article
The need for a simply applied quantitative assessment of handedness is discussed and some previous forms reviewed. An inventory of 20 items with a set of instructions and response- and computational-conventions is proposed and the results obtained from a young adult population numbering some 1100 individuals are reported. The separate items are examined from the point of view of sex, cultural and socio-economic factors which might appertain to them and also of their inter-relationship to each other and to the measure computed from them all. Criteria derived from these considerations are then applied to eliminate 10 of the original 20 items and the results recomputed to provide frequency-distribution and cumulative frequency functions and a revised item-analysis. The difference of incidence of handedness between the sexes is discussed.
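The measure computed from such an inventory is conventionally a laterality quotient scaled from -100 (fully left-handed) to +100 (fully right-handed); the short sketch below illustrates that computation with invented item responses, where each item is scored by the number of preference marks given to the right and left hand.

```python
# Laterality quotient from handedness-inventory responses (sketch):
# LQ = 100 * (R - L) / (R + L), where R and L are the summed preference
# marks for the right and left hand across all items.
def laterality_quotient(item_scores):
    """item_scores: list of (right_marks, left_marks) per inventory item."""
    right = sum(r for r, _ in item_scores)
    left = sum(l for _, l in item_scores)
    return 100.0 * (right - left) / (right + left)

# Invented responses for a 10-item inventory (e.g., writing, drawing, ...).
responses = [(2, 0), (2, 0), (1, 0), (2, 0), (1, 1),
             (2, 0), (1, 0), (2, 0), (1, 0), (2, 0)]
print(laterality_quotient(responses))  # ~88, strongly right-handed
```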
Article
Positron emission tomography was used to investigate differences in regional cerebral activity during word retrieval in response to different prompts. The contrast of semantic category fluency and initial letter fluency resulted in selective activation of left temporal regions; the reverse contrast yielded activation in left frontal regions (BA44/6). A further comparison between types of category fluency demonstrated a more anterior temporal response for natural kinds and more posterior activation for manipulable manmade objects. These results support behavioural data suggesting that category fluency is relatively more dependent on temporal-lobe regions, and initial letter fluency on frontal structures; and that categorical word retrieval is not a uniformly distributed function within the brain. This is compatible with the category-specific deficits observed after some focal lesions.
Article
We performed functional magnetic resonance imaging (MRI) in professional piano players and control subjects during an overtrained complex finger movement task using a blood oxygenation level dependent echo-planar gradient echo sequence. Activation clusters were seen in primary motor cortex, supplementary motor area, premotor cortex and superior parietal lobule. We found significant differences in the extent of cerebral activation between both groups, with piano players having a smaller number of activated voxels. We conclude that, due to long-term motor practice, a different cortical activation pattern can be visualized in piano players. For the same movements, fewer neurons need to be recruited. The different volume of the activated cortical areas might therefore reflect the different effort necessary for motor performance in both groups.
Article
Cerebral blood flow (CBF) was measured with PET during rudimentary singing of a single pitch and vowel, contrasted to passive listening to complex tones. CBF increases in cortical areas related to motor control were seen in the supplementary motor area, anterior cingulate cortex, precentral gyri, anterior insula (and the adjacent inner face of the precentral operculum) and cerebellum, replicating most previously seen during speech. Increases in auditory cortex were seen within right Heschl's gyrus, and in the posterior superior temporal plane (and the immediately overlying parietal cortex). Since cortex near right Heschl's has been linked to complex pitch perception, its asymmetric activation here may be related to analyzing the fundamental frequency of one's own voice for feedback-guided modulation.
Article
Aside from spoken language, singing represents a second mode of acoustic (auditory-vocal) communication in humans. As a new aspect of brain lateralization, functional magnetic resonance imaging (fMRI) revealed two complementary cerebral networks subserving singing and speaking. Reproduction of a non-lyrical tune elicited activation predominantly in the right motor cortex, the right anterior insula, and the left cerebellum whereas the opposite response pattern emerged during a speech task. In contrast to the hemodynamic responses within motor cortex and cerebellum, activation of the intrasylvian cortex turned out to be bound to overt task performance. These findings corroborate the assumption that the left insula supports the coordination of speech articulation. Similarly, the right insula might mediate temporo-spatial control of vocal tract musculature during overt singing. Both speech and melody production require the integration of sound structure or tonal patterns, respectively, with a speaker's emotions and attitudes. Considering the widespread interconnections with premotor cortex and limbic structures, the insula is especially suited for this task.
Article
Hemodynamic responses were measured using functional magnetic resonance imaging in two professional piano players and two carefully matched non-musician control subjects during the performance of self-paced bimanual and unimanual tapping tasks. The bimanual tasks were chosen because they resemble typical movements pianists have to generate during piano exercises. The results showed that the primary and secondary motor areas (M1, SMA, pre-SMA, and CMA) were activated to a considerably lesser degree in professional pianists than in non-musicians. This difference was strongest for the pre-SMA and CMA, where professional pianists showed very little activation. The results suggest that the long-lasting, extensive hand-skill training of the pianists leads to greater efficiency, which is reflected in a smaller number of active neurons needed to perform given finger movements. This in turn enlarges the possible control capacity for a wide range of movements because more movements, or more 'degrees of freedom', are controllable.
Article
We used functional MRI to examine the functional anatomy of inner speech and different forms of auditory verbal imagery (imagining speech) in normal volunteers. We hypothesized that generating inner speech and auditory verbal imagery would be associated with left inferior frontal activation, and that generating auditory verbal imagery would involve additional activation in the lateral temporal cortices. Subjects were scanned, while performing inner speech and auditory verbal imagery tasks, using a 1.5 Tesla magnet. The generation of inner speech was associated with activation in the left inferior frontal/insula region, the left temporo-parietal cortex, right cerebellum and the supplementary motor area. Auditory verbal imagery in general, as indexed by the three imagery tasks combined, was associated with activation in the areas engaged during the inner speech task, plus the left precentral and superior temporal gyri (STG), and the right homologues of all these areas. These results are consistent with the use of the 'articulatory loop' during both inner speech and auditory verbal imagery, and the greater engagement of verbal self-monitoring during auditory verbal imagery.