Hemispheric roles in the perception of speech prosody

Department of Audiology and Speech Sciences, Purdue University, West Lafayette, IN 47907-2038, USA.
NeuroImage (Impact Factor: 6.36). 10/2004; 23(1):344-57. DOI: 10.1016/j.neuroimage.2004.06.004
Source: PubMed

ABSTRACT Speech prosody is processed in neither a single region nor a specific hemisphere, but engages multiple areas comprising a large-scale spatially distributed network in both hemispheres. It remains to be elucidated whether hemispheric lateralization is based on higher-level prosodic representations or lower-level encoding of acoustic cues, or both. A cross-language (Chinese; English) fMRI study was conducted to examine brain activity elicited by selective attention to Chinese intonation (I) and tone (T) presented in three-syllable (I3, T3) and one-syllable (I1, T1) utterance pairs in a speeded-response discrimination paradigm. The Chinese group exhibited greater activity than the English group in a left inferior parietal region across tasks (I1, I3, T1, T3). Only the Chinese group exhibited a leftward asymmetry in inferior parietal and posterior superior temporal (I1, I3, T1, T3), anterior temporal (I1, I3, T1, T3), and frontopolar (I1, I3) regions. Both language groups shared a rightward asymmetry in the mid portions of the superior temporal sulcus and middle frontal gyrus irrespective of prosodic unit or temporal interval. Hemispheric laterality effects enable us to distinguish brain activity associated with higher-order prosodic representations in the Chinese group from that associated with lower-level acoustic/auditory processes that are shared among listeners regardless of language experience. Lateralization is influenced by language experience, which shapes the internal prosodic representation of an external auditory signal. We propose that speech prosody perception is mediated primarily by the right hemisphere, but is left-lateralized to task-dependent regions when language processing is required beyond the auditory analysis of the complex sound.

Available from: M. Dzemidzic, Mar 02, 2014
  • Source
    • "This is also in line with behavioral and imaging studies on perceptual detection of words, which show LH superiority for verbal processing and for rapid information processes (Gandour et al., 2003; Nicholls, 1996; Schirmer & Kotz, 2006; Vouloumanos, Kiehl, Werker, & Liddle, 2001; Zatorre & Belin, 2001). Specifically, the ERP data revealed greater amplitude activation over the left than the right anterior and posterior regions during early ERP processing (150–170 ms) and greater activation over the left than the right posterior region during later ERP processing (240–260 ms). The fact that LH activation was observed for words in both early and late processing stages also supports the possibility that LH involvement is more pronounced in the perception of prosodic units at the syllable or word level (Erhan et al., 1998; Gandour et al., 2004; Sandmann et al., 2007). However, there was greater amplitude activation over the right than the left anterior and central regions during later ERP processing."
    ABSTRACT: This study examined the effect of sad prosody on hemispheric specialization for word processing using behavioral and electrophysiological measures. A dichotic listening task combining focused attention and signal-detection methods was conducted to evaluate the detection of a word spoken in neutral or sad prosody. An overall right ear advantage together with leftward lateralization in early (150–170 ms) and late (240–260 ms) processing stages was found for word detection, regardless of prosody. Furthermore, the early stage was most pronounced for words spoken in neutral prosody, showing greater negative activation over the left than the right hemisphere. In contrast, the later stage was most pronounced for words spoken with sad prosody, showing greater positive activation over the left than the right hemisphere. The findings suggest that sad prosody alone was not sufficient to modulate hemispheric asymmetry in word-level processing. We posit that lateralized effects of sad prosody on word processing are largely dependent on the psychoacoustic features of the stimuli as well as on task demands.
    Brain and Cognition 04/2015; 96:28-37. DOI:10.1016/j.bandc.2015.03.002 · 2.68 Impact Factor
  • Source
    • "… advanced stage may involve distinct underlying processes from earlier stages, and therefore may show a different relationship with brain structure. The importance of left-hemisphere engagement in fluent Mandarin has been implicated by the findings that native Mandarin speakers show left-lateralized functional representation of language (Gandour et al., 2004; Hsieh, Gandour, Wong, & Hutchins, 2001; Klein, Zatorre, Milner, & Zhao, 2001; Tong et al., 2005; Xu et al., 2006). Thus, left-hemisphere networks and tracts may become increasingly important as learners' proficiency improves. More studies are necessary to unveil the complete picture of the neurobiology of second …"
    ABSTRACT: Second language learning becomes increasingly difficult with age, but some adults learn more successfully than others. We examined whether inter-subject variability in the microstructure of white matter pathways, as measured by diffusion tensor imaging (DTI), would predict native English speakers' outcomes in learning Mandarin Chinese. Twenty-one adults were scanned before participating in an intensive 4-week Mandarin course. At the end of the Mandarin course, participants completed a final exam that assessed their skills in both spoken and written Mandarin. Individual participants' white-matter tracts were reconstructed from their native DTI data and related to final-exam performance. Superior language learning was correlated with DTI measures in the right hemisphere, but not in the left hemisphere. In particular, greater initial fractional anisotropy (FA) in both the right superior longitudinal fasciculus (parietal bundle) and the right inferior longitudinal fasciculus was associated with more successful Mandarin learning. The relation between white-matter structure in the right hemisphere of native English speakers and successful initial language learning may reflect the tonal and visuo-spatial properties, respectively, of spoken and written Mandarin Chinese.
    Journal of Neurolinguistics 09/2014; 33. DOI:10.1016/j.jneuroling.2014.08.004 · 1.60 Impact Factor
  • Source
    • "Pitch processing has a right hemisphere advantage, as shown in behavioral experiments such as dichotic listening (DL; Sidtis, 1981), PET studies (Zatorre and Belin, 2001), and later fMRI studies (Boemio et al., 2005; Jamison et al., 2006); for review, see Zatorre et al. (2002). By contrast, compared to non-tonal language speakers, tonal language speakers show greater left hemisphere activity during lexical tone perception (Gandour et al., 1998, 2004; Hsieh et al., 2001; Wang et al., 2004), and multiple brain regions in the left hemisphere are believed to be the primary source of the N400 (Lau et al., 2008). Given these findings, we also examined lateralization patterns in our study to investigate top-down and bottom-up processing of lexical tones."
    ABSTRACT: Speech perception entails both top-down processing that relies primarily on language experience and bottom-up processing that depends mainly on instant auditory input. Previous models of speech perception often claim that bottom-up processing occurs in an early time window, whereas top-down processing takes place in a late time window after stimulus onset. In this paper, we evaluated the temporal relation of both types of processing in lexical tone perception. We conducted a series of event-related potential (ERP) experiments that recruited Mandarin participants and adopted three experimental paradigms, namely dichotic listening, lexical decision with phonological priming, and semantic violation. By systematically analyzing the lateralization patterns of the early and late ERP components observed in these experiments, we discovered that auditory processing of pitch variations in tones, as a bottom-up effect, elicited greater right hemisphere activation; in contrast, linguistic processing of lexical tones, as a top-down effect, elicited greater left hemisphere activation. We also found that both types of processing co-occurred in both the early (around 200 ms) and late (around 300–500 ms) time windows, which supports a parallel model of lexical tone perception. Unlike the previous view that language processing is special and performed by dedicated neural circuitry, our study has elucidated that language processing can be decomposed into general cognitive functions (e.g., sensory and memory) and shares neural resources with these functions.
    Frontiers in Behavioral Neuroscience 03/2014; 8:97. DOI:10.3389/fnbeh.2014.00097 · 4.16 Impact Factor