Horwitz, B. et al. Activation of Broca's area during the production of spoken and signed language: a combined cytoarchitectonic mapping and PET analysis. Neuropsychologia 41(14), 1868–1876 (2003). DOI: 10.1016/S0028-3932(03)00125-8

Language Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bldg. 10, Rm. 6C420, MSC 1591, Bethesda, MD 20892, USA.

Broca's area in the inferior frontal gyrus consists of two cytoarchitectonically defined regions: Brodmann areas (BA) 44 and 45. By combining probabilistic maps of these two areas with functional neuroimaging data obtained using PET, we show that, once the generation of complex movements and sounds is taken into account, BA45, not BA44, is activated by both speech and signing during the production of language narratives in bilingual subjects fluent from early childhood in both American Sign Language (ASL) and English. It is BA44, not BA45, that is activated by the generation of complex articulatory movements of oral/laryngeal or limb musculature. The same patterns of activation are found for oral language production in a group of English-speaking monolingual subjects. These findings implicate BA45 as the part of Broca's area that is fundamental to the modality-independent aspects of language generation.

    • "Consistent with the Braun et al. (2001) results, both sign and speech engaged the left inferior frontal gyrus (Broca's area) indicating a modality-independent role for this region in lexical production. Using probabilistic cytoarchitectonic mapping and data from the Braun et al. (2001) study, Horwitz et al. (2003) reported that BA 45 was engaged during both speaking and signing, but there was no involvement of BA 44, compared to the motor baseline conditions. In addition, there was extensive activation in BA 44, but not in BA 45, for the non-linguistic oral and manual control tasks compared to rest. "
    ABSTRACT: To investigate the impact of sensory-motor systems on the neural organization for language, we conducted an H₂¹⁵O-PET study of sign and spoken word production (picture-naming) and an fMRI study of sign and audio-visual spoken language comprehension (detection of a semantically anomalous sentence) with hearing bilinguals who are native users of American Sign Language (ASL) and English. Directly contrasting speech and sign production revealed greater activation in bilateral parietal cortex for signing, while speaking resulted in greater activation in bilateral superior temporal cortex (STC) and right frontal cortex, likely reflecting auditory feedback control. Surprisingly, the language production contrast revealed a relative increase in activation in bilateral occipital cortex for speaking. We speculate that greater activation in visual cortex for speaking may actually reflect cortical attenuation when signing, which functions to distinguish self-produced from externally generated visual input. Directly contrasting speech and sign comprehension revealed greater activation in bilateral STC for speech and greater activation in bilateral occipital-temporal cortex for sign. Sign comprehension, like sign production, engaged bilateral parietal cortex to a greater extent than spoken language. We hypothesize that posterior parietal activation in part reflects processing related to spatial classifier constructions in ASL and that anterior parietal activation may reflect covert imitation that functions as a predictive model during sign comprehension. The conjunction analysis for comprehension revealed that both speech and sign bilaterally engaged the inferior frontal gyrus (with more extensive activation on the left) and the superior temporal sulcus, suggesting an invariant bilateral perisylvian language system. We conclude that surface-level differences between sign and spoken languages should not be dismissed and are critical for understanding the neurobiology of language.
    Frontiers in Psychology 05/2014; 5:484. DOI: 10.3389/fpsyg.2014.00484
    • "Recent work utilizing cytoarchitectonic probability maps [72] suggests that area 45 supports lexical selection processes whereas area 44 is more involved in lexical access via the segmental route to reading. A number of studies of signed language processing have reliably found activation in the left IFG which further speaks to the modality independence of Broca’s region [36], [73], [74]. "
    ABSTRACT: Individuals with significant hearing loss often fail to attain competency in reading orthographic scripts which encode the sound properties of spoken language. Nevertheless, some profoundly deaf individuals do learn to read at age-appropriate levels. The question of what differentiates proficient deaf readers from less-proficient readers is poorly understood but topical, as efforts to develop appropriate and effective interventions are needed. This study uses functional magnetic resonance imaging (fMRI) to examine brain activation in deaf readers (N = 21), comparing proficient (N = 11) and less proficient (N = 10) readers' performance in a widely used test of implicit reading. Proficient deaf readers activated left inferior frontal gyrus and left middle and superior temporal gyrus in a pattern that is consistent with regions reported in hearing readers. In contrast, the less-proficient readers exhibited a pattern of response characterized by inferior and middle frontal lobe activation (right>left) which bears some similarity to areas reported in studies of logographic reading, raising the possibility that these individuals are using a qualitatively different mode of orthographic processing than is traditionally observed in hearing individuals reading sound-based scripts. The evaluation of proficient and less-proficient readers points to different modes of processing printed English words. Importantly, these preliminary findings allow us to begin to establish the impact of linguistic and educational factors on the neural systems that underlie reading achievement in profoundly deaf individuals.
    PLoS ONE 01/2013; 8(1):e54696. DOI: 10.1371/journal.pone.0054696
    • "FMRI studies have shown 'activation' of certain brain areas involved in language processing (e.g. Osterhout 1997; Hagoort et al. 1999; Embick et al. 2000; Horwitz et al. 2003; Pulvermüller & Assadollahi 2007), with different levels of language processing identified in specific regions, "
    ABSTRACT: Several decades of work in AI have focused on developing a new generation of systems that can acquire knowledge via interaction with the world. Yet, until very recently, most such attempts were underpinned by research which predominantly regarded linguistic phenomena as separated from the brain and body. This could lead one to believe that, to emulate linguistic behaviour, it suffices to develop ‘software’ operating on abstract representations that will work on any computational machine. This picture is inaccurate for several reasons, which are elucidated in this paper and extend beyond sensorimotor and semantic resonance. Beginning with a review of research, I list several heterogeneous arguments against disembodied language, in an attempt to draw conclusions for developing embodied multisensory agents which communicate verbally and non-verbally with their environment. Without taking into account both the architecture of the human brain and embodiment, it is unrealistic to accurately replicate the processes which take place during language acquisition, comprehension, production, or during non-linguistic actions. While robots are far from isomorphic with humans, they could benefit from strengthened associative connections in the optimization of their processes and their reactivity and sensitivity to environmental stimuli, and in situated human-machine interaction. The concept of multisensory integration should be extended to cover linguistic input and the complementary information combined from temporally coincident sensory impressions.
    Linguistic and Cognitive Approaches To Dialogue Agents; 07/2012