Neural substrates of phonemic perception

Department of Neurology, Medical College of Wisconsin, Milwaukee, WI 53226, USA.
Cerebral Cortex (Impact Factor: 8.31). 11/2005; 15(10):1621-31. DOI: 10.1093/cercor/bhi040
Source: PubMed

ABSTRACT: The temporal lobe in the left hemisphere has long been implicated in the perception of speech sounds. Little is known, however, regarding the specific function of different temporal regions in the analysis of the speech signal. Here we show that an area extending along the left middle and anterior superior temporal sulcus (STS) is more responsive to familiar consonant-vowel syllables during an auditory discrimination task than to comparably complex auditory patterns that cannot be associated with learned phonemic categories. In contrast, areas in the dorsal superior temporal gyrus bilaterally, closer to primary auditory cortex, are activated to the same extent by the phonemic and nonphonemic sounds. Thus, the left middle/anterior STS appears to play a role in phonemic perception. It may represent an intermediate stage of processing in a functional pathway linking areas in the bilateral dorsal superior temporal gyrus, presumably involved in the analysis of physical features of speech and other complex non-speech sounds, to areas in the left anterior STS and middle temporal gyrus that are engaged in higher-level linguistic processes.

    • "All three groups of participants showed adaptation to the repetition of lyrics or melodies in the bilateral STG and STS, but in both patient groups, these effects were markedly smaller in spatial extent when compared to healthy controls. Notably, patients with left (but not right) hippocampal sclerosis exhibited significantly decreased adaptation to lyrics in the left STS, which is known to play a role in phonemic processing and also known to be crucial for the perception of a sound as speech (Dehaene-Lambertz et al., 2005; Liebenthal, 2005; Möttönen et al., 2006; for a review on STS, see Hein and Knight, 2008). This finding is most likely tied to the role of the left medial temporal lobe in verbal processing (Meyer et al., 2005; Wagner et al., 2008; Greve et al., 2011) and may reflect the perturbed build-up of memory traces for lyrics (and verbal material in general) due to disrupted feedback connections between medial and lateral structures of the left temporal lobe (Eichenbaum, 2000). "
    ABSTRACT: Songs constitute a natural combination of lyrics and melodies, but it is unclear whether and how these two song components are integrated during the emergence of a memory trace. Network theories of memory suggest a prominent role of the hippocampus, together with unimodal sensory areas, in the build-up of conjunctive representations. The present study tested the modulatory influence of the hippocampus on neural adaptation to songs in lateral temporal areas. Patients with unilateral hippocampal sclerosis and healthy matched controls were presented with blocks of short songs in which lyrics and/or melodies were varied or repeated in a crossed factorial design. Neural adaptation effects were taken as correlates of incidental emergent memory traces. We hypothesized that hippocampal lesions, particularly in the left hemisphere, would weaken adaptation effects, especially the integration of lyrics and melodies. Results revealed that lateral temporal lobe regions showed weaker adaptation to repeated lyrics as well as a reduced interaction of the adaptation effects for lyrics and melodies in patients with left hippocampal sclerosis. This suggests a deficient build-up of a sensory memory trace for lyrics and a reduced integration of lyrics with melodies, compared to healthy controls. Patients with right hippocampal sclerosis showed a similar profile of results although the effects did not reach significance in this population. We highlight the finding that the integrated representation of lyrics and melodies typically shown in healthy participants is likely tied to the integrity of the left medial temporal lobe. This novel finding provides the first neuroimaging evidence for the role of the hippocampus during repetitive exposure to lyrics and melodies and their integration into a song.
    Frontiers in Human Neuroscience 01/2014; 8:111. DOI:10.3389/fnhum.2014.00111 · 2.90 Impact Factor
    • "For example, Golestani and Zatorre (2004) trained monolingual English speakers to identify Hindi speech sounds as belonging to either dental or retroflex phonetic categories, a phonetic distinction that is not used in English. After only 5 h of training, results showed significant behavioral improvements and functional changes within cortical areas that are used during the classification of native language speech sounds, including within the left superior temporal gyrus (an area associated with phonemic perception; Liebenthal et al., 2005), the left inferior frontal gyrus, and the left caudate nucleus (areas associated with speech articulation; Hickok and Poeppel, 2007). Correlations between degree of success in learning to identify the contrasting phonetic units and changes in neural activity were also observed. "
    ABSTRACT: Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain's ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research.
    Frontiers in Systems Neuroscience 11/2013; 7:90. DOI:10.3389/fnsys.2013.00090
    • "Davis and Johnsrude suggest this interaction between speech perception and speech production networks ensures that speech is perceived categorically (Davis and Johnsrude, 2007). Generally, categorical perception is an important aspect of speech perception, as it ensures that speech-related acoustic signals are not perceived as an acoustic continuum but rather as clearly separable phonetic information (Davis and Johnsrude, 2007; Liebenthal, 2005). In this respect, the observed premotor involvement in the study by Osnes et al. (2011b) may indicate such a categorical perception, facilitating the perception of a distorted sound as a speech sound. "
    ABSTRACT: Verbal communication does not rely only on the simple perception of auditory signals. It is rather a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. In the first part, structural and functional asymmetry of language-relevant structures will be discussed. The second part of the review will discuss recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration and a ventral stream for extracting meaning as well as for processing sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review article concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues.
    Hearing research 10/2013; 307. DOI:10.1016/j.heares.2013.09.011 · 2.85 Impact Factor
