Cracking the language code: Neural mechanisms underlying speech parsing

Ahmanson-Lovelace Brain Mapping Center, Semel Institute for Neuroscience and Human Behavior, Los Angeles, California 90095, USA.
The Journal of Neuroscience: The Official Journal of the Society for Neuroscience (Impact Factor: 6.75). 08/2006; 26(29):7629-39. DOI: 10.1523/JNEUROSCI.5501-05.2006
Source: PubMed

ABSTRACT: Word segmentation, the detection of word boundaries in continuous speech, is a critical aspect of language learning. Previous research in infants and adults demonstrated that a stream of speech can be readily segmented based solely on the statistical and speech cues afforded by the input. Using functional magnetic resonance imaging (fMRI), the neural substrate of word segmentation was examined on-line as participants listened to three streams of concatenated syllables, containing either statistical regularities alone, statistical regularities and speech cues, or no cues. Despite the participants' inability to explicitly detect differences between the speech streams, neural activity differed significantly across conditions, with left-lateralized signal increases in temporal cortices observed only when participants listened to streams containing statistical regularities, particularly the stream containing speech cues. In a second fMRI study, designed to verify that word segmentation had implicitly taken place, participants listened to trisyllabic combinations that occurred with different frequencies in the streams of speech they had just heard ("words," 45 times; "partwords," 15 times; "nonwords," once). Reliably greater activity in left inferior and middle frontal gyri was observed when comparing words with partwords and, to a lesser extent, when comparing partwords with nonwords. Activity in these regions, taken to index the implicit detection of word boundaries, was positively correlated with participants' rapid auditory processing skills. These findings provide a neural signature of on-line word segmentation in the mature brain and an initial model with which to study developmental changes in the neural architecture involved in processing speech cues during language learning.
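The "statistical regularities" in such streams are typically transitional probabilities between adjacent syllables: high within a word, low at word boundaries. A minimal sketch of that idea, using made-up trisyllabic "words" and a simple thresholded segmenter (this is an illustration of the general paradigm, not the authors' stimuli or analysis):

```python
import random
from collections import defaultdict

# Hypothetical trisyllabic "words" (illustrative, not the study's stimuli)
WORDS = ["pabiku", "tibudo", "golatu", "daropi"]
SYL = 2  # each syllable is two characters in this toy alphabet


def syllables(word):
    return [word[i:i + SYL] for i in range(0, len(word), SYL)]


def make_stream(words, n=300, seed=0):
    """Concatenate randomly ordered words, with no pauses, into one stream."""
    rng = random.Random(seed)
    stream = []
    for _ in range(n):
        stream.extend(syllables(rng.choice(words)))
    return stream


def transitional_probabilities(stream):
    """TP(B | A) = count(A -> B) / count(A) over adjacent syllables."""
    pair, single = defaultdict(int), defaultdict(int)
    for a, b in zip(stream, stream[1:]):
        pair[(a, b)] += 1
        single[a] += 1
    return {k: v / single[k[0]] for k, v in pair.items()}


def segment(stream, tps, threshold=0.5):
    """Insert a word boundary wherever the transitional probability dips."""
    words, current = [], [stream[0]]
    for a, b in zip(stream, stream[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words


stream = make_stream(WORDS)
tps = transitional_probabilities(stream)
recovered = set(segment(stream, tps))  # recovers the four embedded words
```

Within a word the transitional probability is 1.0 (each syllable has a unique continuation), while across word boundaries it averages about 0.25 (the next word is one of four), so a 0.5 threshold cleanly separates the two. "Partwords" in the abstract correspond to sequences spanning such a low-probability boundary.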

  •
    ABSTRACT: The mature brain is organized into distinct neural networks defined by regions demonstrating correlated activity during task performance as well as rest. While research has begun to examine differences in these networks between children and adults, little is known about developmental changes during early adolescence. Using functional magnetic resonance imaging (fMRI), we examined the Default Mode Network (DMN) and the Central Executive Network (CEN) at ages 10 and 13 in a longitudinal sample of 45 participants. In the DMN, participants showed increasing integration (i.e., stronger within-network correlations) between the posterior cingulate cortex (PCC) and the medial prefrontal cortex. During this time frame participants also showed increased segregation (i.e., weaker between-network correlations) between the PCC and the CEN. Similarly, from age 10 to 13, participants showed increased connectivity between the dorsolateral prefrontal cortex and other CEN nodes, as well as increasing DMN segregation. IQ was significantly positively related to CEN integration at age 10, and between-network segregation at both ages. These findings highlight early adolescence as a period of significant maturation for the brain's functional architecture and demonstrate the utility of longitudinal designs to investigate neural network development.
    Developmental Cognitive Neuroscience 08/2014; DOI:10.1016/j.dcn.2014.08.002 · 3.71 Impact Factor
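The integration and segregation measures described above come down to comparing mean pairwise correlations of ROI time courses within a network versus between networks. A minimal sketch on simulated data (the ROI labels and noise model are assumptions for illustration, not the study's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 200  # time points
dmn = ["PCC", "mPFC"]          # illustrative Default Mode Network nodes
cen = ["dlPFC", "PPC"]         # illustrative Central Executive Network nodes

# Simulate: nodes within a network share a latent signal, plus noise
latent_dmn = rng.standard_normal(T)
latent_cen = rng.standard_normal(T)
data = np.column_stack(
    [latent_dmn + 0.5 * rng.standard_normal(T) for _ in dmn]
    + [latent_cen + 0.5 * rng.standard_normal(T) for _ in cen]
)

corr = np.corrcoef(data.T)  # ROI x ROI correlation matrix


def mean_corr(idx_a, idx_b):
    """Mean pairwise correlation between two ROI index sets (no self-pairs)."""
    vals = [corr[i, j] for i in idx_a for j in idx_b if i != j]
    return float(np.mean(vals))


within_dmn = mean_corr(range(2), range(2))    # integration: higher is stronger
between = mean_corr(range(2), range(2, 4))    # segregation: lower is stronger
```

In this framing, the developmental findings correspond to `within_dmn`-style values rising and `between`-style values falling from age 10 to 13.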
  •
    ABSTRACT: Auditory neurophysiology has demonstrated how basic acoustic features are mapped in the brain, but it is still not clear how multiple sound components are integrated over time and recognized as an object. We investigated the role of statistical learning in encoding the sequential features of complex sounds by recording neuronal responses bilaterally in the auditory forebrain of awake songbirds that were passively exposed to long sound streams. These streams contained sequential regularities, and were similar to streams used in human infants to demonstrate statistical learning for speech sounds. For stimulus patterns with contiguous transitions and with nonadjacent elements, single and multiunit responses reflected neuronal discrimination of the familiar patterns from novel patterns. In addition, discrimination of nonadjacent patterns was stronger in the right hemisphere than in the left, and may reflect an effect of top-down modulation that is lateralized. Responses to recurring patterns showed stimulus-specific adaptation, a sparsening of neural activity that may contribute to encoding invariants in the sound stream and that appears to increase coding efficiency for the familiar stimuli across the population of neurons recorded. As auditory information about the world must be received serially over time, recognition of complex auditory objects may depend on this type of mnemonic process to create and differentiate representations of recently heard sounds.
    Proceedings of the National Academy of Sciences 09/2014; 111(40). DOI:10.1073/pnas.1412109111 · 9.81 Impact Factor
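The stimulus-specific adaptation mentioned above is commonly quantified in the auditory literature with a contrast index between responses to rare and frequent presentations of a sound. A sketch of that standard metric (the firing-rate values are made up; this is not the paper's analysis):

```python
def ssa_index(r_deviant, r_standard):
    """(deviant - standard) / (deviant + standard): positive when the
    response to the rare (deviant) stimulus exceeds the adapted response
    to the frequent (standard) stimulus."""
    return (r_deviant - r_standard) / (r_deviant + r_standard)


# Illustrative firing rates in spikes/s (hypothetical values)
print(ssa_index(15.0, 5.0))  # → 0.5: adaptation to the familiar stimulus
```

Larger index values indicate a greater reduction of the response to the familiar (recurring) pattern, consistent with the "sparsening" of activity the abstract describes.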