Article

Infant speech perception activates Broca's area: A developmental magnetoencephalography study


Abstract

Discriminative responses to tones, harmonics, and syllables in the left hemisphere were measured with magnetoencephalography in neonates, 6-month-old infants, and 12-month-old infants using the oddball paradigm. Real-time head position tracking, signal space separation, and head position standardization were applied to secure high-quality data for source localization. Minimum current estimates were calculated to characterize infants' cortical activity when detecting sound changes. The activation patterns observed in the superior temporal and inferior frontal regions provide initial evidence that a perceptual-motor link for speech perception emerges early in life and may depend on experience.
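The oddball paradigm mentioned above presents a long run of "standard" stimuli with occasional "deviant" stimuli whose detection drives the discriminative response. As a minimal illustration (not the authors' actual stimulus code), the sketch below generates such a sequence, assuming a hypothetical deviant probability of 15% and at least one standard between deviants:

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.15, seed=0):
    """Generate an oddball stimulus sequence: mostly 'standard' trials
    with occasional 'deviant' trials, never two deviants in a row."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == "deviant":
            # Enforce at least one standard between deviants.
            seq.append("standard")
        elif rng.random() < deviant_prob:
            seq.append("deviant")
        else:
            seq.append("standard")
    return seq

seq = oddball_sequence(200)
print(seq.count("deviant"), seq.count("standard"))
```

Constraints such as the minimum number of standards between deviants and the deviant rate vary between studies; the values here are illustrative only.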


... (Belin et al., 2000; Blasi et al., 2011; Grossmann et al., 2010). Imada et al. (2006) examined 6-month-old and 12-month-old infants while they listened to nonspeech tones, harmonics, and syllables using magnetoencephalography (MEG). Dehaene-Lambertz et al. (2006) also studied 3-month-old infants as they listened to sentences. ...
... In response to speech, synchronised activity in auditory and motor brain areas was reported at 6 and 12 months of age (Imada et al., 2006). When brain activity was recorded in auditory and motor brain areas, newborns did not exhibit any activation in the motor speech area in response to auditory speech. ...
... However, in 6- to 12-month-old infants, activity in the auditory and motor brain regions became more synchronised (Imada et al., 2006). In both experiments, the inferior frontal regions, including Broca's area, which participate in speech production, showed activation in response to auditorily presented speech. ...
Article
Full-text available
Speech may be one of humans' best inventions, given its critical communicative role in daily life; hence any study of it is valuable. To our knowledge, only three studies focusing on the brain regions associated with speech production have been published, all more than eighteen years ago, and research on these areas remains insufficient. The present review aims to provide information about all brain areas contributing to speech production, in order to update knowledge of the brain areas involved. The current study confirms earlier claims about the activation of some brain areas in the process; however, the previous studies were not comprehensive, and not all brain areas were mentioned. Three cerebral lobes are involved: the frontal, parietal, and temporal lobes. The regions involved include the left superior parietal lobe, Wernicke's area, Heschl's gyri, the primary auditory cortex, the left posterior superior temporal gyrus (pSTG), Broca's area, and the premotor cortex. In addition, regions of the lateral sulcus (anterior insula and posterior superior temporal sulcus), basal ganglia (putamen), and forebrain (thalamus) participate in the process. However, overt and covert (silent) speech produced different patterns of activation in Broca's and Wernicke's areas. Moreover, mouth position and breathing style also affected the speech mechanism. In terms of development, the early postnatal years are important for speech, and three crucial stages of speech development can be identified: the pre-verbal stage, the transition to active speech, and the refinement of speech. In addition, during the early years of speech development, auditory and motor brain regions are involved in the process.
... The study reported here investigates the possibility that the onset of the production of canonical syllables may serve as an estimate of infants' speech perception abilities. Speech production skills can be reliably assessed at very young ages (Ertmer & Jung, 2012b; Ertmer et al., 2007; Nathani et al., 2006; Oller & Eilers, 1988; Walker & Bass-Ringdahl, 2008), and as reviewed below, perception and production skills have been found to be closely coupled in children with normal hearing (NH; Bruderer et al., 2015; D'Ausilio et al., 2012; DePaolis et al., 2011; Imada et al., 2006; Skipper et al., 2017), raising the possibility that assessment of early speech production skills may serve as an estimate of speech perception and a predictor of later language outcomes (Walker & Bass-Ringdahl, 2008). However, little is known about the relationship between speech perception and speech production skills in children with CIs. ...
... Many studies suggest that speech perception and speech production are integrally related to each other (Bruderer et al., 2015; D'Ausilio et al., 2012; DePaolis et al., 2011; Fernald et al., 2006; Imada et al., 2006; Kuhl et al., 2005; Marchman & Fernald, 2008; McCathren et al., 1999; McGillion et al., 2016; Pulvermüller et al., 2006; Skipper et al., 2017; Vihman et al., 2014; Whitehurst et al., 1991). One study by Imada et al. (2006) found that, when 6- and 12-month-olds with NH heard speech sounds, they showed coupled activation in the auditory cortex and motor cortex (i.e., speech production areas). This coupled activation was not found either in newborns or with nonspeech stimuli, suggesting that it was not innate but instead acquired through the experience of hearing speech. ...
Article
Purpose: The study sought to determine whether the onset of canonical vocalizations in children with cochlear implants (CIs) is related to speech perception skills and spoken vocabulary size at 24 months postactivation. Method: The vocal development in 13 young CI recipients (implanted by their third birthdays; mean age at activation = 20.62 months, SD = 8.92 months) was examined at every 3-month interval during the first 2 years of CI use. All children were enrolled in auditory-oral intervention programs. Families of these children used spoken English only. To determine the onset of canonical syllables, the first 50 utterances from 20-min adult-child interactions were analyzed during each session. The onset timing was determined when at least 20% of utterances included canonical syllables. As children's outcomes, we examined their Lexical Neighborhood Test scores and vocabulary size at 24 months postactivation. Results: Pearson correlation analysis showed that the onset timing of canonical syllables is significantly correlated with phonemic recognition skills and spoken vocabulary size at 24 months postactivation. Regression analyses also indicated that the onset timing of canonical syllables predicted phonemic recognition skills and spoken vocabulary size at 24 months postactivation. Conclusion: Monitoring vocal advancement during the earliest periods following cochlear implantation could be valuable as an early indicator of auditory-driven language development in young children with CIs. It remains to be studied which factors improve vocal development for young CI recipients.
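The Pearson correlation and regression analyses described in this abstract can be illustrated with a short sketch. The data below are entirely hypothetical (the study's actual values are not reproduced here); the sketch only shows the form of the computation:

```python
import numpy as np

# Hypothetical data for illustration only: onset timing of canonical
# syllables (months post-activation) and spoken vocabulary size at
# 24 months post-activation for 13 made-up children.
onset_months = np.array([3, 6, 6, 9, 9, 12, 12, 15, 15, 18, 18, 21, 21], dtype=float)
vocab_size = np.array([420, 380, 350, 300, 310, 260, 240, 200, 210, 150, 160, 110, 90],
                      dtype=float)

# Pearson correlation: later onset should correlate with a smaller vocabulary.
r = np.corrcoef(onset_months, vocab_size)[0, 1]

# Simple least-squares regression predicting vocabulary from onset timing.
slope, intercept = np.polyfit(onset_months, vocab_size, 1)

print(f"r = {r:.2f}, slope = {slope:.1f} words/month")
```

With real data one would also report significance (e.g., via `scipy.stats.pearsonr`) and control for covariates such as age at activation; this sketch shows only the core computation.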
... Infant speech perception experiments have focused on the ability to discriminate more sophisticated sound categories, distinguished by temporal structure, phonemic category, or prosody, at different ages. Among the ten selected infant speech processing studies (one with a music intervention), over half incorporated source localization analysis to verify their hypotheses [47,65,67,70,72]. Unlike adult MEG studies, in which individual subjects' MRIs are generally available for source modeling, most of the infant studies used a representative age-appropriate infant head template with spherical head modeling for source estimation. ...
... A range of inverse models was adopted in these studies, including ECD, the minimum norm estimate (MNE), standardized low-resolution brain electromagnetic tomography (sLORETA), and dynamic statistical parametric mapping (dSPM), to estimate the locations of the neural generators of the targeted cognitive tasks under the constraints of the forward models. By incorporating MEG source localization techniques, infant speech studies could elucidate the neuroanatomical underpinnings of cross-language speech categorization [70,72] and motor theory in infants [67], and even test infants' semantic processing [48] and how music training benefits later speech processing [47], with both the temporal and the spatial characterization that other neuroimaging tools may not be able to provide. The maturation of MEG, a non-invasive and zero-noise source imaging tool for infants, will add new perspectives to theories of speech development built on infants' behavioral responses. ...
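Of the inverse models listed above, the minimum norm estimate has a simple closed form: x = G^T (G G^T + lambda*I)^(-1) b, where G is the leadfield matrix from the forward model, b the sensor measurements, and lambda a regularization parameter. The toy sketch below uses a random leadfield and made-up dimensions (not real MEG data) purely to illustrate the linear algebra:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward model: hypothetical dimensions, far fewer sensors than sources
# (the underdetermined situation that motivates minimum-norm regularization).
n_sensors, n_sources = 8, 20
G = rng.standard_normal((n_sensors, n_sources))  # leadfield (forward) matrix

# Simulated measurement: one active source plus a little sensor noise.
x_true = np.zeros(n_sources)
x_true[5] = 1.0
b = G @ x_true + 0.01 * rng.standard_normal(n_sensors)

# Minimum norm estimate: x = G^T (G G^T + lambda*I)^(-1) b
lam = 0.1
x_mne = G.T @ np.linalg.solve(G @ G.T + lam * np.eye(n_sensors), b)

# The estimate spreads energy across sources while still fitting the data.
print(x_mne.shape, float(np.linalg.norm(G @ x_mne - b)))
```

Real analyses use packages such as MNE-Python with anatomically constrained forward models (BEM or template heads, as discussed below); this sketch shows only the core computation shared by MNE-family solvers.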
... A more sophisticated approach is to take different types of physiological structure into account by using BEM or other realistic head-modeling methods for forward modeling [1]. Later studies using templates built from a series of age-matched infant MRIs to identify the sources of cognitive processing have also been shown to be effective when individual infant MRIs were not available [67]. Advances in forward modeling methods better configure the parameters for the next source-analysis step: the inverse solution. ...
Article
Full-text available
Magnetoencephalography (MEG) is known for its temporal precision and good spatial resolution in cognitive brain research. Nonetheless, it is still rarely used in developmental research, and its role in developmental cognitive neuroscience is not adequately addressed. The current review focuses on the source analysis of MEG measurement and its potential to answer critical questions on neural activation origins and patterns underlying infants' early cognitive experience. The advantages of MEG source localization are discussed in comparison with functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS), two leading imaging tools for studying cognition across age. Challenges of the current MEG experimental protocols are highlighted, including measurement and data processing, which could potentially be resolved by developing and improving both software and hardware. A selection of infant MEG research in auditory, speech, vision, motor, sleep, cross-modality, and clinical application is then summarized and discussed with a focus on the source localization analyses. Based on the literature review and the advancements of the infant MEG systems and source analysis software, typical practices of infant MEG data collection and analysis are summarized as the basis for future developmental cognitive research.
... In support of this view, EEG modelling approaches of P2 responses evoked by vowels and tones have proposed cortical sources in the superior temporal, supramarginal and inferior frontal gyri in two-month-old infants (Basirat, Dehaene, & Dehaene-Lambertz, 2014; Bristow et al., 2009), or in the anterior cingulate and temporal regions in 6-month-olds (Ortiz-Mantilla, Hämäläinen, & Benasich, 2012). Imada et al. (2006) also reported superior temporal and inferior frontal sources using MEG in 6- and 12-month-olds. These studies hypothesized that inferior frontal regions are involved in syllable perception, which might explain the relationship we observed between the latency of P2 contralateral responses and the IFG microstructure. ...
Preprint
Full-text available
Brain development incorporates several intermingled mechanisms throughout infancy, leading to intense and asynchronous maturation across cerebral networks and functional modalities. Combining electroencephalography (EEG) and diffusion magnetic resonance imaging (MRI), previous studies in the visual modality showed that the functional maturation of event-related potentials (ERPs) during the first postnatal semester relates to structural changes in the corresponding white matter pathways. Here we aimed to investigate similar issues in the auditory modality. We measured ERPs to syllables in 1- to 6-month-old infants and analyzed them in relation to the maturational properties of the underlying neural substrates measured with diffusion tensor imaging (DTI). We first observed a decrease in the latency of the auditory P2 and in the diffusivities of the auditory tracts and perisylvian regions with age. Secondly, we highlighted some of the early functional and structural substrates of lateralization. Contralateral responses to monaural syllables were stronger and faster than ipsilateral responses, particularly in the left hemisphere. Besides, the acoustic radiations, arcuate fasciculus, middle temporal and angular gyri showed DTI asymmetries with a more complex and advanced microstructure in the left hemisphere, whereas the reverse was observed for the inferior frontal and superior temporal gyri. Finally, after accounting for age-related variance, we correlated the inter-individual variability in P2 responses with that in the microstructural properties of callosal fibers and inferior frontal regions. This study, combining dedicated EEG and MRI approaches in infants, highlights the complex relation between functional responses to auditory stimuli and the maturational properties of the corresponding neural network.
... Furthermore, around 3 months of age, infants already seem able to discriminate speech presented in normal versus reversed order, as evidenced by greater functional activation of left temporal areas to speech stimuli and right frontal activation when processing normal speech, as found in adults (Dehaene-Lambertz et al., 2002). These data are consistent with fMRI and magnetoencephalography studies showing activation of speech production brain regions when infants process speech sounds (Dehaene-Lambertz et al., 2006; Imada et al., 2006). Regarding ERP studies, 5-month-old infants display an MMN response to words stressed either on the first or on the second syllable, showing that at this age they are already able to discriminate between two types of stress pattern in words (Friederici et al., 2007). ...
... Language development in this age period is coordinated with other cognitive behaviors, such as perceptual and attentional abilities (de Diego-Balaguer et al., 2016). In particular, Imada et al. (2006) found that, from 6 months of age, infants display increasingly synchronized perceptual-motor brain activity when listening to speech (simultaneous activation of auditory and motor brain regions). This development is associated with the later emergence of first words. ...
Article
This article presents a literature review focusing on the neural and psychophysiological correlates associated with social communication development in infancy. Studies presenting evidence on infants’ brain activity and developments in infant sensory processing, motor, cognitive, language, and emotional abilities are described in regard to the neuropsychophysiological processes underlying the emergence of these specific behavioral milestones and their associations with social communication development. Studies that consider specific age-related characteristics across the infancy period are presented. Evidence suggests that specific neural and physiological signatures accompany age-related social communication development during the first 18 months of life.
... 17 Previous studies of infancy and later childhood from high-income countries have found that distinct brain areas are associated with various neurodevelopmental functions, including areas of the frontal lobe associated with cognitive development 8,20,21 and frontal and temporal regions associated with language. 22,23 There is a need for studies investigating the brain structure-cognition relationship across different socio-cultural contexts 7 and in LMICs, where the majority of children at risk of developmental impairment reside. 24 Neuroimaging from a young age may provide insight into early neurodevelopmental processes as well as their relationships with current and future neuropathology. ...
... Our regional findings are consistent with data from functional studies of younger children, which show language associated with activation of the temporal lobe as well as frontal regions. 22,66 In adults, there is evidence for a key role of the temporal region (specifically the fusiform gyrus), as well as collateral recruitment of frontal cortex, in language. 8,67 Other studies have also found cortical thickness negatively correlated with language (and executive function) in childhood, predominantly in the frontal and temporal cortical regions. 21,48,52,68,69 Separately, white matter in frontal and temporal cortices has also been linked to receptive and expressive language development. ...
Article
Full-text available
Magnetic resonance imaging (MRI) is an indispensable tool for investigating brain development in young children and the neurobiological mechanisms underlying developmental risk and resilience. Sub-Saharan Africa has the highest proportion of children at risk of developmental delay worldwide, yet in this region there is very limited neuroimaging research focusing on the neurobiology of such impairment. Furthermore, paediatric MRI imaging is challenging in any setting due to motion sensitivity. Although sedation and anesthesia are routinely used in clinical practice to minimise movement in young children, this may not be ethical in the context of research. Our study aimed to investigate the feasibility of paediatric multimodal MRI at age 2-3 years without sedation, and to explore the relationship between cortical structure and neurocognitive development at this understudied age in a sub-Saharan African setting. A total of 239 children from the Drakenstein Child Health Study, a large observational South African birth cohort, were recruited for neuroimaging at 2-3 years of age. Scans were conducted during natural sleep utilising locally developed techniques. T1-MEMPRAGE and T2-weighted structural imaging, resting state functional MRI, diffusion tensor imaging and magnetic resonance spectroscopy sequences were included. Child neurodevelopment was assessed using the Bayley-III Scales of Infant and Toddler Development. Following 23 pilot scans, 216 children underwent scanning and T1-weighted images were obtained from 167/216 (77%) of children (median age 34.8 months). Furthermore, we found cortical surface area and thickness within frontal regions were associated with cognitive development, and in temporal and frontal regions with language development (beta coefficient ≥0.20). Overall, we demonstrate the feasibility of carrying out a neuroimaging study of young children during natural sleep in sub-Saharan Africa. 
Our findings indicate that dynamic morphological changes in heteromodal association regions are associated with cognitive and language development at this young age. These proof-of-concept analyses suggest similar links between the brain and cognition as prior literature from high income countries, enhancing understanding of the interplay between cortical structure and function during brain maturation.
... In support of this view, EEG modelling approaches of P2 responses evoked by vowels and tones have proposed cortical sources in the superior temporal, supramarginal and inferior frontal gyri in two-month-old infants (Basirat et al., 2014; Bristow et al., 2009), or in the anterior cingulate and temporal regions in 6-month-olds (Ortiz-Mantilla et al., 2012). Imada et al. (2006) also reported superior temporal and inferior frontal sources using MEG in 6- and 12-month-olds. These studies hypothesized that inferior frontal regions are involved in syllable perception, which might explain the relationship we observed between the latency of P2 contralateral responses and the IFG microstructure. ...
Article
Full-text available
Infant brain development incorporates several intermingled mechanisms leading to intense and asynchronous maturation across cerebral networks and functional modalities. Combining electroencephalography (EEG) and diffusion magnetic resonance imaging (MRI), previous studies in the visual modality showed that the functional maturation of event-related potentials (ERPs) during the first postnatal semester relates to structural changes in the corresponding white matter pathways. Here we investigated similar issues in the auditory modality. We measured ERPs to syllables in 1- to 6-month-old infants and related them to the maturational properties of the underlying neural substrates measured with diffusion tensor imaging (DTI). We first observed a decrease in the latency of the auditory P2 and in the diffusivities of the auditory tracts and perisylvian regions with age. Secondly, we highlighted some of the early functional and structural substrates of lateralization. Contralateral responses to monaural syllables were stronger and faster than ipsilateral responses, particularly in the left hemisphere. Besides, the acoustic radiations, arcuate fasciculus, middle temporal and angular gyri showed DTI asymmetries with a more complex and advanced microstructure in the left hemisphere, whereas the reverse was observed for the inferior frontal and superior temporal gyri. Finally, after accounting for age-related variance, we correlated the inter-individual variability in P2 responses with that in the microstructural properties of callosal fibers and inferior frontal regions. This study, combining dedicated EEG and MRI approaches in infants, highlights the complex relation between functional responses to auditory stimuli and the maturational properties of the corresponding neural network.
... Although it is difficult to determine whether changes in integration of sensorimotor and speech processing in infants are a result of experience or maturation, at least one study (Imada et al., 2006) shows that such integration does develop during the first year. Neonates showed MEG responses to speech sounds only in temporal auditory areas, but at 6 and 12 months activation was observed in both temporal areas and inferior frontal gyrus (Imada et al., 2006). However, it is not known whether such early-developing cross-modal processing is limited to direct, temporally precise links such as those between mouth movements and speech sounds, or whether it also encompasses less deterministic associations such as the social contingencies we observed. ...
... Similarly to behavioral lateralization, functional asymmetries can be observed before the end of the first year of life (Streri & de Hevia, 2014). Using magnetoencephalography, Imada et al. (2006) found that left-hemisphere lateralization of both phonetic perception and the motor system is present as early as 6 months of age. After the first year of life, children between 1 and 5 years show leftward lateralization of language, though not as prominent and stable as in older children and adults (Kohler et al., 2015). ...
Thesis
Full-text available
Many asymmetries characterize human functioning at the behavioral level, such as handedness (right-handedness, left-handedness, mixed-handedness), and at the cerebral level. Atypical laterality is frequently mentioned in the scientific literature as part of the clinical picture of several neurodevelopmental and psychiatric disorders. Understanding the neurodevelopmental mechanisms underlying laterality could thus shed light on the etiology of cognitive and motor difficulties. The goal of this thesis is twofold. First, the theoretical objective was to investigate the involvement of the prenatal environment in the development of handedness. The influence of the vestibular system, fetal presentation, and other perinatal factors related to pregnancy complications and birth stressors was tested. Our results show no influence of fetal presentation on the subsequent development of handedness. Perinatal adversities such as prematurity, low birthweight, and poor neonatal health reflected by very low Apgar scores, however, appear to be risk factors that increase the prevalence of atypical handedness and motor impairments. Second, the applied objective was to simultaneously detect the different perceptual biases implicated in graphomotor productions. A 3D-2D transcription graphic task was proposed for identifying global patterns of drawing asymmetries, underpinned by cerebral lateralization, biomechanical constraints, and sociocultural influences. Our results suggest that cerebral lateralization, modulated by handedness and sex, influences graphomotor asymmetries in both children and adults. However, this influence is weaker in adults, which could be due to sociocultural influences.
... More generally, studies in infants and in adults show that listening to speech or to sounds recruits areas linked to vocal production (Imada et al., 2006; Wilson et al., 2004; Yuen et al., 2010). It may therefore be that more frequent motor experience with a given lip shape affects the preference for one video over another. ...
Thesis
From birth, infants are exposed to talking faces. In order to interact appropriately with others, newborns must learn to process the information these faces provide. Face processing and language processing thus develop rapidly during infants' first year of life. However, for both faces and language, many infants have an exposure bias: they are almost exclusively exposed to faces of their own type and to their native language. One consequence of this exposure bias is that infants develop finer discrimination abilities for native stimuli than for non-native stimuli. In the scientific literature, this phenomenon, called perceptual narrowing, has been demonstrated many times in the context of language development and of face processing development. The shared developmental trajectory of these two cognitive systems during the first year of life suggests interactions between them; however, these interactions remain little studied. The goal of the thesis presented here was to study the interactions between language processing and face processing during the first year of life. In a first study, we examined the impact of face type on a phoneme-matching task in 3- and 9-month-old infants. Three-month-olds do not seem to match a vowel to the video of a female speaker if she is not of a familiar type. The results of this study indicate that, from 3 months on, infants process the audio-visual signal differently depending on the type of face producing it. In a second study, we assessed the impact of face type on the perception of the McGurk effect in 6-, 9-, and 12-month-old infants. In addition, we examined the robustness of this effect cross-culturally (in France and in Japan). We show that sensitivity to this audio-visual illusion appears to depend on face type. Moreover, combined with the data of our Japanese colleagues, our results show that sensitivity to the McGurk effect can be conditioned by the culture in which infants grow up. In a third study, we investigated the impact of associations between face types and language types on the visual attention of 6-, 9-, and 12-month-old infants. This study shows that at 3 months, certain language-face associations seem to be expected by infants and are looked at longer. These associations are considered congruent because they do not conflict with what infants usually encounter in their environment. In a fourth study, we tested the impact of these associations on the recognition of individuals by 9- and 12-month-old infants. We show that congruent associations aid the recognition of individuals, whereas incongruent associations disrupt it. These studies reinforce the idea that close interactions link language processing and face processing during infancy. Furthermore, we identify new markers of perceptual narrowing before 9 months and demonstrate a new experimental means of modulating its impact. This thesis work broadens our knowledge of perceptual narrowing and thus refines its definition.
... Neurofunctional evidence supports these hypotheses. Imada et al. (2006) investigated activation in response to CV syllables of the superior temporal cortex (the locus of auditory analysis) and of inferior frontal regions (involved in speech motor analysis) from birth to 12 months. They found that, from 6 months onward, activity in the superior temporal regions was progressively mirrored by activity in the inferior frontal regions. ...
Article
Full-text available
Growing evidence shows that early speech processing relies on information extracted from speech production. In particular, production skills are linked to word-form processing, as more advanced producers prefer listening to pseudowords containing consonants they do not yet produce. However, it is unclear whether production affects word-form encoding (the translation of perceived phonological information into a memory trace) and/or recognition (the automatic retrieval of a stored item). Distinguishing recognition from encoding makes it possible to explore whether sensorimotor information is stored in long-term phonological representations (and thus retrieved during recognition) or is processed when encoding a new item, but not necessarily when retrieving a stored item. In this study, we asked whether speech-related sensorimotor information is retained in long-term representations of word-forms. To this aim, we tested the effect of production on the recognition of ecologically learned, real familiar word-forms. Testing these items made it possible to assess the effect of sensorimotor information in a context in which encoding did not happen during testing itself. Two groups of French-learning monolinguals (11- and 14-month-olds) participated in the study. Using the Headturn Preference Procedure, each group heard two lists, each containing 10 familiar word-forms composed of either early-learned consonants (commonly produced by French-learners at these ages) or late-learned consonants (more rarely produced at these ages). We hypothesized differences in listening preferences as a function of word-list and/or production skills. At both 11 and 14 months, babbling skills modulated orientation times to the word-lists containing late-learned consonants.
This specific effect establishes that speech production impacts familiar word-form recognition by 11 months, suggesting that sensorimotor information is retained in long-term word-form representations and accessed during word-form processing.
... Indeed, the first study with MEG examined infants' neural responses to speech and nonspeech sounds in newborns, 6-month-olds and 12-month-olds cross-sectionally (Imada et al., 2006). The study provided initial evidence showing the enhancement of neural activation in response to speech, not only in the auditory region (superior temporal) but also in the frontal region (inferior frontal) with increasing age. ...
Article
Full-text available
The ‘sensitive period’ for phonetic learning (∼6-12 months) is one of the earliest milestones in language acquisition, during which infants start to become specialized in processing speech sounds in their native language. In the last decade, advancements in neuroimaging technologies for infants have started to shed light on the underlying neural mechanisms supporting this important learning period. The current study reports on a large longitudinal dataset with the aim of replicating and extending findings on two important questions: 1) what are the developmental changes during the ‘sensitive period’ for native and nonnative speech processing? 2) how does native and nonnative speech processing in infants predict later language outcomes? Fifty-four infants were recruited at 7 months of age and their neural processing of speech was measured using Magnetoencephalography (MEG). Specifically, the neural sensitivity to a native and a nonnative speech contrast was indexed by the mismatch response (MMR). They repeated the measurement again at 11 months of age and their language development was further tracked from 12 months to 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Using an a priori Region-of-Interest (ROI) approach, we observed significant increases for the Native MMR in the left inferior frontal region (IF) and superior temporal region (ST) from 7 to 11 months, but not for the Nonnative MMR. Complementary whole brain comparison revealed more widespread developmental changes for both contrasts. However, only individual differences in the left IF and ST for the Nonnative MMR at 11 months of age were significant predictors of individual vocabulary growth up to 30 months of age. An exploratory machine-learning based analysis further revealed that whole brain time series for both Native and Nonnative contrasts can robustly predict later outcomes, but with very different underlying spatial-temporal patterns.
The current study extends our current knowledge and suggests that native and nonnative speech processing may follow different developmental trajectories and utilize different mechanisms that are relevant for later language skills.
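The mismatch response (MMR) used throughout the abstracts above is, at its core, the difference between the averaged evoked responses to deviant and standard sounds in an oddball paradigm. The following is a minimal sketch of that computation on synthetic single-sensor data; the simulated amplitudes, epoch length, and analysis window are illustrative assumptions, not the study's actual parameters or pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-sensor epochs (trials x time samples). In a real study
# these would be epoched MEG recordings; here the deviant condition is
# simulated with a larger evoked deflection peaking at 250 ms.
n_trials, n_times = 100, 600          # 600 samples at 1 kHz = 600 ms epochs
times = np.arange(n_times) / 1000.0   # time axis in seconds

def simulate(peak_amp):
    # Gaussian deflection at 250 ms (sd 50 ms) plus trial-by-trial noise
    evoked = peak_amp * np.exp(-((times - 0.25) ** 2) / (2 * 0.05 ** 2))
    return evoked + rng.normal(0.0, 0.5, size=(n_trials, n_times))

standard = simulate(peak_amp=1.0)
deviant = simulate(peak_amp=2.0)

# Mismatch response: deviant-minus-standard difference of the averaged
# evoked responses.
mmr = deviant.mean(axis=0) - standard.mean(axis=0)

# Summarize the MMR as the mean amplitude in an a priori 200-300 ms window.
window = (times >= 0.2) & (times <= 0.3)
mmr_amplitude = mmr[window].mean()
print(f"MMR mean amplitude in window: {mmr_amplitude:.2f}")
```

In practice, libraries such as MNE-Python handle epoching, averaging, and sensor selection; the difference-wave logic, however, is exactly this simple subtraction.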
... Broca's area is also known for its involvement in motor planning and in the sequential and hierarchical organisation of behaviours, including syntax [81], tool-use [39,82], and sign language production, thus including manual and oro-facial gestures [83,84]. In infants, speech perception activates Broca's area from very early in development, as highlighted by MEG and functional MRI studies [48,85,86]. This activation before the babbling stage suggests that activity in this area is not due to motor learning but might instead drive the learning of complex sequences [86]. ...
Article
Full-text available
Humans are the only species that can speak. Nonhuman primates, however, share some ‘domain-general’ cognitive properties that are essential to language processes. Whether these shared cognitive properties between humans and nonhuman primates are the result of a continuous evolution [homologies] or of a convergent evolution [analogies] remains difficult to demonstrate. However, comparing their respective underlying structure—the brain—to determine its similarity or divergence across species is critical for assessing which of the two hypotheses is the more probable. Key areas associated with language processes are the Planum Temporale, Broca’s Area, the Arcuate Fasciculus, the Cingulate Sulcus, the Insula, the Superior Temporal Sulcus, the Inferior Parietal Lobe, and the Central Sulcus. These structures share a fundamental feature: they are functionally and structurally specialised to one hemisphere. Interestingly, several nonhuman primate species, such as chimpanzees and baboons, show human-like structural brain asymmetries for areas homologous to key language regions. The question then arises: for what function did these asymmetries arise in non-linguistic primates, if not for language per se? In an attempt to provide some answers, we review the literature on the lateralisation of the gestural communication system, which may represent the missing behavioural link to brain asymmetries for language areas' homologues in our common ancestor.
... One idea first proposed by Stevens and Halle (1967), and further advanced by Kuhl et al. (2014), termed "analysis by synthesis," is that the auditory analysis of speech is temporally coupled with synthesis of the motor plans necessary to produce the speech signal (see also Poeppel & Monahan, 2011, for a discussion). Compatible with this view are neuroimaging data with infants showing that premotor planning areas get recruited during concurrent, passive perception of speech (Imada et al., 2006;Kuhl et al., 2014). ...
Article
Full-text available
Purpose Current models of speech development argue for an early link between speech production and perception in infants. Recent data show that young infants (at 4–6 months) preferentially attend to speech sounds (vowels) with infant vocal properties compared to those with adult vocal properties, suggesting the presence of special “memory banks” for one's own nascent speech-like productions. This study investigated whether the vocal resonances (formants) of the infant vocal tract are sufficient to elicit this preference and whether this perceptual bias changes with age and emerging vocal production skills. Method We selectively manipulated the fundamental frequency ( f 0 ) of vowels synthesized with formants specifying either an infant or adult vocal tract, and then tested the effects of those manipulations on the listening preferences of infants who were slightly older than those previously tested (at 6–8 months). Results Unlike findings with younger infants (at 4–6 months), slightly older infants in Experiment 1 displayed a robust preference for vowels with infant formants over adult formants when f 0 was matched. The strength of this preference was also positively correlated with age among infants between 4 and 8 months. In Experiment 2, this preference favoring infant over adult formants was maintained when f 0 values were modulated. Conclusions Infants between 6 and 8 months of age displayed a robust and distinct preference for speech with resonances specifying a vocal tract that is similar in size and length to their own. This finding, together with data indicating that this preference is not present in younger infants and appears to increase with age, suggests that nascent knowledge of the motor schema of the vocal tract may play a role in shaping this perceptual bias, lending support to current models of speech development. Supplemental Material https://doi.org/10.23641/asha.17131805
... Broca's area is also known for its involvement in motor planning and in the sequential and hierarchical organisation of behaviours, including syntax (Koechlin & Jubault, 2006), tool-use (Stout and Hecht, 2017), and sign language production, thus including manual and oro-facial gestures (Emmorey et al., 2004; Campbell, MacSweeney, & Waters, 2008). In infants, speech perception activates Broca's area from very early in development, as highlighted by MEG and functional MRI studies (e.g., Imada et al., 2006; Dehaene-Lambertz et al., 2006, 2010). This activation before the babbling stage suggests that activity in this area is not due to motor learning but might instead drive the learning of complex sequences. ...
Preprint
Full-text available
Humans are the only species that can speak. Nonhuman primates, however, share some "domain-general" cognitive properties that are essential to language processes. Whether these shared cognitive properties of humans and nonhuman primates are the result of a continuous or convergent evolution can be investigated by comparing their respective underlying structure: the brain. Key areas associated with language processes are the Planum Temporale, Broca's Area, the Arcuate Fasciculus, the Cingulate Sulcus, the Insula, the Superior Temporal Sulcus, the Inferior Parietal Lobe, and the Central Sulcus. These structures share a fundamental feature: They are functionally and also structurally specialised to one hemisphere. Interestingly, several nonhuman primate species, such as chimpanzees and baboons, show human-like structural brain asymmetries for areas homologous to these key markers of functional language lateralisation. The question arises, then: for what function did these asymmetries arise in non-linguistic primates, if not for language per se? In an attempt to provide some answers, we review the literature on the lateralisation of the gestural communication system, which may represent the missing behavioural link to brain asymmetries for language areas' homologues in our common ancestor.
... MEG technology started to allow a closer examination of the underlying neural sources of the mismatch response (MMR). Indeed, the first study with MEG examined infants' neural responses to speech and nonspeech sounds in newborns, 6-month-olds and 12-month-olds cross-sectionally (Imada et al., 2006). The study provided initial evidence showing the enhancement of neural activation in response to speech, not only in the auditory region (superior temporal) but also in the frontal region (inferior frontal) with increasing age. ...
Preprint
The 'sensitive period' for phonetic learning (~6-12 months) is one of the earliest milestones in language acquisition, during which infants start to become specialized in processing speech sounds in their native language. In the last decade, advancements in neuroimaging technologies for infants have started to shed light on the underlying neural mechanisms supporting this important learning period. The current study reports on the largest longitudinal dataset to date with the aim of replicating and extending findings on two important questions: 1) what are the developmental changes during the 'sensitive period' for native and nonnative speech processing? 2) how does native and nonnative speech processing in infants predict later language outcomes? Fifty-four infants were recruited at 7 months of age and their neural processing of speech was measured using Magnetoencephalography (MEG). Specifically, the neural sensitivity to a native and a nonnative speech contrast was indexed by the mismatch response (MMR). They repeated the measurement again at 11 months of age and their language development was further tracked from 12 months to 30 months of age using the MacArthur-Bates Communicative Development Inventory (CDI). Using an a priori Region-of-Interest (ROI) approach, we observed significant increases for the Native MMR in the left inferior frontal region (IF) and superior temporal region (ST) from 7 to 11 months, but not for the Nonnative MMR. Complementary whole brain comparison revealed more widespread developmental changes for both contrasts. However, only individual differences in the left IF and ST for the Nonnative MMR at 11 months of age were significant predictors of individual vocabulary growth up to 30 months of age. An exploratory machine-learning based analysis further revealed that whole brain MMR for both Native and Nonnative contrasts can robustly predict later outcomes, but with very different underlying spatial-temporal patterns.
The current study extends our current knowledge and suggests that native and nonnative speech processing may follow different developmental trajectories and utilize different mechanisms that are relevant for later language skills.
... 44,45, whereas activation in both temporal and prefrontal cortices has been reported in response to speech-like sounds46,47. Considering that in the present work the auditory cue presented to infants was a non-speech stimulus, our results are in line with the literature and suggest that interregional interactions between the temporal and prefrontal cortex might be specific to speech-like sounds46,47. ...
Article
Full-text available
In the last decades, non-invasive and portable neuroimaging techniques, such as functional near infrared spectroscopy (fNIRS), have allowed researchers to study the mechanisms underlying the functional cognitive development of the human brain, thus furthering the potential of Developmental Cognitive Neuroscience (DCN). However, the traditional paradigms used for the analysis of infant fNIRS data are still quite limited. Here, we introduce a multivariate pattern analysis for fNIRS data, xMVPA, that is powered by eXplainable Artificial Intelligence (XAI). The proposed approach is exemplified in a study that investigates visual and auditory processing in six-month-old infants. xMVPA not only identified patterns of cortical interactions that confirmed the existing literature; in the form of conceptual linguistic representations, it also provided evidence for brain networks engaged in the processing of visual and auditory stimuli that had previously been overlooked by other methods, while demonstrating similar statistical performance. Andreu-Perez et al. developed a multivariate pattern analysis for fNIRS data (xMVPA), which is powered by eXplainable Artificial Intelligence (XAI). They demonstrated its application in the context of investigating visual and auditory processing in six-month-old infants and showed that it provided insight into patterns of cortical networks.
... For instance, neonates (0-3 days old) show bilateral brain activation in temporal regions when they begin to hear speech (May, Byers-Heinlein, Gervain, & Werker, 2011). Shortly thereafter, 5-day-old neonates' brain activity is confined to left temporal regions (Imada et al., 2006). As infants gain greater experience with their linguistic environment, brain activity during speech processing includes left frontal and temporal regions (Dehaene-Lambertz et al., 2006). ...
Thesis
Full-text available
Early life experiences are thought to alter children’s cognition and brain development, yet the precise nature of these changes remains largely unknown. Research has shown that bilinguals’ languages are simultaneously active, and their parallel activation imposes an increased demand for attentional mechanisms even when the intention is to use only one of the languages (cf. Kroll & Bialystok, 2013). Theoretical frameworks (Adaptive Control hypothesis; Green & Abutalebi, 2013) propose that the daily demands of dual-language experiences impact the organization of neural networks. To test this hypothesis, this dissertation used functional Near-Infrared Spectroscopy (fNIRS) to image brain regions in young monolingual and bilingual children (53 English monolinguals, 40 Spanish-English bilinguals; ages 7-9) while they performed a verbal attention task assessing phonological interference and a non-verbal attention task assessing attentional networks. The results did not reveal differences in behavioral performance between bilinguals and monolinguals; however, the neuroimaging findings revealed three critical differences between the groups: (i) bilingual children engaged less brain activity than monolinguals in left frontal regions when managing linguistic competitors in one language, suggesting efficient processing; (ii) bilinguals showed overall greater brain activity than monolinguals in left fronto-parietal regions for attentional networks (i.e., alerting, orienting, and executive); and (iii) bilinguals’ brain activity in left fronto-parietal regions during the Executive attentional network was associated with better language abilities. Taken together, these findings suggest that attentional mechanisms and language processes interact in bilinguals’ left fronto-parietal regions to impact the dynamics of brain plasticity during child development.
This work informs neuro-cognitive theories on how early life experiences such as bilingualism impact brain development and plasticity.
... Neonates already exhibit an adult-like left-hemisphere advantage for speech processing [4,5]. Magnetoencephalography and near-infrared spectroscopy studies have further identified the superior temporal and inferior frontal regions as key areas for infants' processing of tones, harmonics, and syllables, suggesting that large-scale spatiotemporal brain networks spanning the frontal and temporal lobes form an important neural basis for infant speech processing [6,7]. ...
... Heritage grammars have unique properties that set them apart from those of other speaker groups. Traditionally, differences between heritage and non-heritage grammars have been interpreted as the product of simplification, attrition or incomplete acquisition affecting comprehension and production (Imada, Zhang, Cheour, Taulu, Ahonen & Kuhl, 2006; Kuhl & Rivera-Gaxiola, 2008; Montrul, 2007, 2009, 2015; Polinsky, 2011; Silva-Corvalán, 1994). Studies assessing the grammatical knowledge of speakers of Spanish as a heritage language have largely focused on the Spanish subjunctive mood, likely because prescriptively mastering the subjunctive is considered a significant landmark of acquisition, and an indicator of high proficiency (Collentine, 2003). ...
Thesis
Full-text available
ABSTRACT Studies assessing the grammatical knowledge of speakers of Spanish as a heritage language have largely focused on the Spanish subjunctive mood and have concluded, almost unanimously, that heritage speakers’ knowledge of the Spanish subjunctive is non-native-like and subject to incomplete acquisition. However, there is also evidence that while different, heritage speakers’ linguistic knowledge is by no means deficient. The goal of the present dissertation is to achieve a holistic understanding of the nature of grammars of heritage speakers and to contribute to a theory of processing in heritage language contexts that has greater explanatory adequacy. To this end, this dissertation examines knowledge of the Spanish subjunctive in heritage speakers who live in a long-standing bilingual community in Albuquerque, New Mexico, in comparison to a group of Spanish-dominant bilinguals born in Mexico. The dissertation sets out to provide (1) an evidence-based characterization of heritage speakers by using a sociolinguistic questionnaire which, along with PCA, examined the language-experience related factors that best explain the variability in the processing of the subjunctive mood in this population; and (2) an examination of heritage speakers’ and Spanish-dominant bilinguals’ processing of the Spanish subjunctive during online comprehension and production by means of psycholinguistic experiments that integrated corpus data into their design. Results indicated that, both in comprehension and production, the current group of heritage speakers was sensitive to the lexical and structural conditioning of mood selection, and that the performance of heritage speakers and Spanish-dominant bilinguals converged on the same results and trends. 
All participants showed nuanced knowledge of the morphosyntactic factors that modulate the conditioning of mood selection, as suggested by the fact that linguistic factors such as frequency and proficiency also modulated their sensitivity. In addition, based on the PCA conducted, the role of three sociolinguistic variables was examined: use of the heritage language, language entropy, and identification with the heritage language. As predicted, results indicated that sensitivity to the lexical and structural conditioning of mood selection was greater for heritage speakers who: (1) used the heritage language more often on average, (2) used the heritage language in more diverse contexts, and (3) felt more identified with the heritage language. The findings highlight that factors such as the community examined and the ecological validity of the materials used are crucial. In addition, they underscore the importance of triangulating both comprehension and production experimental data, and employing multiple explanatory variables for a more comprehensive approach to complex and highly variable systems such as heritage grammars.
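The PCA step described in the abstract above (deriving explanatory variables from a sociolinguistic questionnaire) can be sketched as follows. All variable names and the simulated correlation structure are hypothetical; real questionnaire responses would replace the synthetic items:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Hypothetical questionnaire responses for 60 speakers: three correlated
# items (heritage-language use, language entropy, identification) plus one
# unrelated item, simulated for illustration only.
n = 60
use = rng.normal(0.0, 1.0, n)
entropy = 0.8 * use + rng.normal(0.0, 0.6, n)          # correlated with use
identification = 0.5 * use + rng.normal(0.0, 0.9, n)   # weakly correlated
unrelated = rng.normal(0.0, 1.0, n)
items = np.column_stack([use, entropy, identification, unrelated])

# Standardize, then extract components summarizing shared variance; the
# per-speaker scores can then serve as explanatory variables in a model.
X = StandardScaler().fit_transform(items)
pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("explained variance ratios:", pca.explained_variance_ratio_)
```

Because the first three items share a common factor, the first component absorbs most of their shared variance, which is the property that makes PCA useful for collapsing correlated questionnaire items into a few interpretable dimensions.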
... The IFG, which overlaps with Broca's area, is demonstrably crucial to speech perception and production (Hickok and Poeppel, 2007). More recently, advances in infant MEG neuroimaging have allowed us to directly observe changes in the IFG during phonetic learning (Imada et al., 2006; Kuhl et al., 2014). Specifically, pre-verbal young infants engage motor areas of the brain (e.g., IFG and cerebellum; see Kuhl et al., 2014) as they listen to speech, long before they can actually produce speech; such engagement is interpreted as infants attempting to simulate the motor gestures required to produce the speech sounds they hear. ...
Article
Full-text available
The ‘sensitive period’ for phonetic learning posits that between 6 and 12 months of age, infants’ discrimination of native and nonnative speech sounds diverges. Individual differences in this dynamic processing of speech have been shown to predict later language acquisition up to 30 months of age, using parental surveys. Yet, it is unclear whether infant speech discrimination could predict longer-term language outcomes and risk for developmental speech-language disorders, which affect up to 16% of the population. The current study reports a prospective prediction of speech-language skills at a much later age (6 years old) from the same children’s nonnative speech discrimination at 11 months of age, indexed by MEG mismatch responses. Children’s speech-language skills at 6 were comprehensively evaluated by a speech-language pathologist in two ways: individual differences in spoken grammar, and the presence versus absence of speech-language disorders. Results showed that the prefrontal MEG mismatch response at 11 months not only significantly predicted individual differences in spoken grammar skills at 6 years, but also accurately identified the presence versus absence of speech-language disorders, using a machine-learning classification. These results represent new evidence that advances our theoretical understanding of the neurodevelopmental trajectory of language acquisition and early risk factors for developmental speech-language disorders.
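The machine-learning classification described above (predicting presence versus absence of a disorder from infant MMR measures) can be illustrated with a toy cross-validated classifier. This is a hedged sketch on simulated data; the single feature, group means, and sample sizes are invented for illustration and do not reflect the study's real measures or pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical data: one MMR amplitude feature per infant at 11 months and
# a later binary outcome (disorder present = 1 / absent = 0). A group
# difference is built in so the classifier has signal to find.
n_per_group = 40
mmr_typical = rng.normal(1.5, 0.5, n_per_group)    # simulated larger MMRs
mmr_disorder = rng.normal(0.7, 0.5, n_per_group)   # simulated attenuated MMRs

X = np.concatenate([mmr_typical, mmr_disorder]).reshape(-1, 1)
y = np.concatenate([np.zeros(n_per_group), np.ones(n_per_group)])

# Cross-validated classification accuracy (chance level = 0.5).
accuracy = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
print(f"cross-validated accuracy: {accuracy:.2f}")
```

Cross-validation matters here because, with small clinical samples, an accuracy computed on the training data itself would badly overstate how well the classifier generalizes to new infants.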
... Source estimation has not been frequently reported in the infant EEG literature, with the notable exception of a few research groups that used volumetric approaches such as CDR with realistic head models built from MRIs of head-size-matched individuals (Xie and Richards, 2017) or from age-matched MRI averages (Lunghi et al., 2019), or used ECD with a four-shell ellipsoidal head model (Ortiz-Mantilla et al., 2019). Source imaging is more frequently used in magnetoencephalography (MEG) (Kao and Zhang, 2019), but it generally relies on overly simplistic spherical models to overcome the absence of realistic head models (Imada et al., 2006; Kuhl et al., 2014), or it uses custom-built subject-specific head models that are not reusable by the research community (Ramírez et al., 2017; Travis et al., 2011). ...
Article
Full-text available
Electroencephalographic (EEG) source reconstruction is a powerful approach that allows anatomical localization of electrophysiological brain activity. Algorithms used to estimate cortical sources require an anatomical model of the head and the brain, generally reconstructed using magnetic resonance imaging (MRI). When such scans are unavailable, a population average can be used for adults, but no average surface template is available for cortical source imaging in infants. To address this issue, we introduce a new series of 13 anatomical models for subjects between zero and 24 months of age. These templates are built from MRI averages and boundary element method (BEM) segmentation of head tissues available as part of the Neurodevelopmental MRI Database. Surfaces separating the pia mater, the gray matter, and the white matter were estimated using the Infant FreeSurfer pipeline. The surface of the skin as well as the outer and inner skull surfaces were extracted using a marching cubes algorithm followed by Laplacian smoothing and mesh decimation. We post-processed these meshes to correct topological errors and ensure watertight meshes. Source reconstruction with these templates is demonstrated and validated using 100 high-density EEG recordings from 7-month-old infants. Hopefully, these templates will support future studies on EEG-based neuroimaging and functional connectivity in healthy infants as well as in clinical pediatric populations.
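The Laplacian smoothing step mentioned in the abstract above can be illustrated on a toy closed polyline. This is a generic sketch of the technique, not the authors' actual mesh pipeline; a real head-surface mesh would supply triangulated vertices and neighbor lists:

```python
import numpy as np

def laplacian_smooth(vertices, neighbors, n_iter=10, lam=0.5):
    """Plain Laplacian smoothing: move each vertex a fraction `lam` of the
    way toward the centroid of its neighbors, for `n_iter` iterations."""
    v = vertices.copy()
    for _ in range(n_iter):
        centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
        v = v + lam * (centroids - v)
    return v

def roughness(v, neighbors):
    # Total deviation of vertices from their local neighbor centroids.
    centroids = np.array([v[nb].mean(axis=0) for nb in neighbors])
    return np.linalg.norm(v - centroids)

# Toy "mesh": a noisy closed polyline (circle), each vertex connected to its
# two ring neighbors. Triangulated head surfaces use the same update rule,
# just with larger neighbor sets.
rng = np.random.default_rng(0)
n = 100
theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
noisy = circle + rng.normal(0.0, 0.05, circle.shape)
neighbors = [np.array([(i - 1) % n, (i + 1) % n]) for i in range(n)]

smoothed = laplacian_smooth(noisy, neighbors, n_iter=20)
print("roughness before:", round(roughness(noisy, neighbors), 3))
print("roughness after: ", round(roughness(smoothed, neighbors), 3))
```

A known side effect of plain Laplacian smoothing is that it shrinks closed surfaces, which is why mesh-processing pipelines often pair it with a corrective scheme (e.g., Taubin smoothing).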
... Juveniles have a species-specific predisposition and listen to the sounds produced by their parents [9]. Then, during the sensorimotor phase, the young birds start to vocalise, initially producing babbling sounds [11]-[13] and then adapting their vocal output to imitate previously heard vocalisations. Finally, the produced vocalisation becomes more and more stereotyped and vocal plasticity significantly drops. ...
Article
Full-text available
Sensorimotor learning represents a challenging problem for natural and artificial systems. Several computational models have been proposed to explain the neural and cognitive mechanisms at play in the brain. In general, these models can be decomposed in three common components: a sensory system, a motor control device and a learning framework. The latter includes the architecture, the learning rule or optimisation method, and the exploration strategy used to guide learning. In this review, we focus on imitative vocal learning, that is exemplified in song learning in birds and speech acquisition in humans. We aim to synthesise, analyse and compare the various models of vocal learning that have been proposed, highlighting their common points and differences. We first introduce the biological context, including the behavioural and physiological hallmarks of vocal learning and sketch the neural circuits involved. Then, we detail the different components of a vocal learning model and how they are implemented in the reviewed models.
... In the group of LRA infants, brain response to speech was greater in the bilateral anterior ROIs compared to the posterior ROIs. Our findings are consistent with previous work that has shown that speech activates regions of the temporal and frontal lobes in 6-month-old infants (Blasi et al., 2015; Imada et al., 2006). It is likely that we did not observe ... [Table 3: Results of exploratory mixed factorial ANOVA (LRA, HRA-, and HRA+ groups).]
Article
Full-text available
Infants at high familial risk for autism spectrum disorder (ASD) are at increased risk for language impairments. Studies have demonstrated that atypical brain response to speech is related to language impairments in this population, but few have examined this relation longitudinally. We used functional near-infrared spectroscopy (fNIRS) to investigate the neural correlates of speech processing in 6-month-old infants at high (HRA) and low risk (LRA) for autism. We also assessed the relation between brain response to speech at 6-months and verbal developmental quotient (VDQ) scores at 24-months. LRA infants exhibited greater brain response to speech in bilateral anterior regions of interest (ROIs) compared to posterior ROIs, while HRA infants exhibited similar brain response across all ROIs. Compared to LRA infants, HRA+ infants who were later diagnosed with ASD had reduced brain response in bilateral anterior ROIs, while HRA- infants who were not later diagnosed with ASD had increased brain response in right posterior ROI. Greater brain response in left anterior ROI predicted VDQ scores for LRA infants only. Findings highlight the importance of studying HRA+ and HRA- infants separately, and implicate a different, more distributed neural system for speech processing in HRA infants that is not related to language functioning.
... Heritage grammars are considered to have unique properties that set them apart from those of other speaker groups. Traditionally, differences between heritage and non-heritage grammars have been interpreted as the product of simplification, attrition or incomplete acquisition impacting heritage speakers' ultimate attainment (Domínguez et al. 2019; Imada et al. 2006; Kuhl and Rivera-Gaxiola 2008; Montrul 2007, 2009, 2016; Polinsky 2008, 2011; Polinsky and Scontras 2019; Silva-Corvalán 1994). In this paper, we argue that simplification, attrition or incomplete acquisition place too much emphasis on the resulting "end state" grammars of heritage speakers and, while prevalent in the field of heritage language research, have been largely contested (Bayram et al. 2019; Domínguez et al. 2019; Felser 2019; Flores and Rinke 2019; Kupisch and Rothman 2018; Meisel 2019; Otheguy 2013; Polinsky and Scontras 2019; Perez-Cortes et al. 2019; Putnam and Sanchez 2013). ...
Article
Full-text available
In this paper, we argue that usage-based approaches to grammar, which specify how linguistic experience leads to grammatical knowledge through the interplay of cognitive, linguistic and social factors, have a central role to play in contributing to a unified theory of heritage language acquisition and processing with much greater explanatory adequacy. We discuss how this approach (1) offers solutions to long-standing problems in the field of heritage language research, (2) links phenomena that have been explained under diverging theoretical perspectives and (3) leads to new hypotheses and testable predictions about what we can expect heritage speakers acquire from their input. We conclude that usage-based approaches are crucial to move away from deficit-oriented perspectives on heritage grammars by taking into consideration how variation in sociolinguistic experience gives rise to differences in how heritage speakers acquire and use their language.
... In typically developing children without any language disabilities, native-language phonetic perception is thought to represent a critical step in initial language learning and promote language growth (Kuhl, 2010; Kuhl et al., 2006; Tsao, Liu, & Kuhl, 2004). Using magnetoencephalography (MEG), brain responses to human voices have been studied as a physiological indicator of language acquisition (Imada et al., 2006; Kuhl, 2010; Kuhl, Ramirez, Bosseler, Lin, & Imada, 2014; Yoshimura et al., 2012, 2014). Therefore, in children with language disorders, the majority of previous studies have also focused on brain responses to human voices. ...
Article
Full-text available
Introduction: In the early development of human infants and toddlers, remarkable changes in brain cortical function for auditory processing have been reported. Knowing the maturational trajectory of auditory cortex responses to the human voice in typically developing young children is crucial for identifying voice processing abnormalities in children at risk for neurodevelopmental disorders and language impairment. An early prominent positive component in the cerebral auditory response in newborns has been reported in previous electroencephalography and magnetoencephalography (MEG) studies. However, it is not clear whether this prominent component in infants less than 1 year of age corresponds to the auditory P1m component that has been reported in young children over 2 years of age.
Methods: To test the hypothesis that the early prominent positive component in infants aged 0 years is an immature manifestation of the P1m that we previously reported in children over 2 years of age, we performed a longitudinal MEG study that focused on this early component and examined maturational changes over three years starting from age 0. Five infants participated in this 3-year longitudinal study.
Results: This research revealed that the early prominent component in infants aged 3 months corresponded to the auditory P1m component in young children over 2 years old, which we had previously reported to be related to language development and/or autism spectrum disorders.
Conclusion: Our data reveal the development of the auditory-evoked field in the left and right hemispheres from 0- to 3-year-old children. These results contribute to the elucidation of the development of brain functions in infants.
... Source estimation has not been frequently reported in the infant EEG literature, with the notable exception of a few research groups that used volumetric approaches such as CDR with realistic head models built from MRIs of head-size-matched individuals (Xie and Richards, 2017) or from age-matched MRI averages (Lunghi et al., 2019), or using ECD with a four-shell ellipsoidal head model (Ortiz-Mantilla et al., 2019). Source imaging is more frequently used in magnetoencephalography (MEG) (Kao and Zhang, 2019), but it generally relies on overly simplistic spherical models to overcome the absence of realistic head models (Imada et al., 2006; Kuhl et al., 2014), or it uses custom-built subject-specific head models that are not reusable by the research community (Ramírez et al., 2017; Travis et al., 2011). ...
Preprint
Full-text available
Electroencephalographic (EEG) source reconstruction is a powerful approach that helps to unmix scalp signals, mitigates volume conduction issues, and allows anatomical localization of brain activity. Algorithms used to estimate cortical sources require an anatomical model of the head and the brain, generally reconstructed using magnetic resonance imaging (MRI). When such scans are unavailable, a population average can be used for adults, but no average surface template is available for cortical source imaging in infants. To address this issue, this paper introduces a new series of 12 anatomical models for subjects between zero and 24 months of age. These templates are built from MRI averages and volumetric boundary element method segmentation of head tissues available as part of the Neurodevelopmental MRI Database. Surfaces separating the pia mater, the gray matter, and the white matter were estimated using the Infant FreeSurfer pipeline. The surface of the skin as well as the outer and inner skull surfaces were extracted using a cube marching algorithm followed by Laplacian smoothing and mesh decimation. We post-processed these meshes to correct topological errors and ensure watertight meshes. The use of these templates for source reconstruction is demonstrated and validated using 100 high-density EEG recordings in 7-month-old infants. Hopefully, these templates will support future studies based on EEG source reconstruction and functional connectivity in healthy infants as well as in clinical pediatric populations. Particularly, they should make EEG-based neuroimaging more feasible in longitudinal neurodevelopmental studies where it may not be possible to scan infants at multiple time points. 
Highlights:
- Twelve surface templates for infants in the 0-2 years old range are proposed
- These templates can be used for EEG source reconstruction using existing toolboxes
- A relatively modest impact of age differences was found in this age range
- Correlation analysis confirms increasing source differences with age differences
- Sources reconstructed with infant versus adult templates significantly differ
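The surface post-processing this abstract describes (Laplacian smoothing of meshes extracted by a cube marching algorithm) can be sketched as follows. This is a minimal illustration of neighbor-averaging ("umbrella operator") Laplacian smoothing on a toy tetrahedron, with hypothetical vertex and face arrays; it is not the authors' pipeline, which also includes mesh decimation and topology repair.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=1, lam=0.5):
    """Umbrella-operator Laplacian smoothing: on each pass, every vertex
    moves a fraction `lam` toward the mean of its mesh neighbors."""
    verts = vertices.astype(float).copy()
    n = len(verts)
    # Build vertex neighbor sets from the triangle edges.
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    for _ in range(iterations):
        # Compute all neighbor means first, then update synchronously.
        means = np.array([verts[list(nb)].mean(axis=0) for nb in neighbors])
        verts += lam * (means - verts)
    return verts

# Toy tetrahedron mesh (hypothetical data, for illustration only).
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
smoothed = laplacian_smooth(V, F, iterations=1, lam=0.5)
```

Each smoothing pass contracts the mesh slightly toward its local averages, which is why production pipelines pair it with constraints or decimation to avoid shrinking anatomy.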
... The SQUID sensors positioned within the MEG helmet measure the minute magnetic fields associated with electrical currents generated by the brain during sensory, motor, or cognitive tasks. MEG allows precise localization of the neural currents responsible for these magnetic field sources [88], [89]; modern head-tracking methods combined with MEG have been used to illustrate phonetic recognition in newborns and in infants during their first year of life. ...
Preprint
This paper reviews the main perspectives on language acquisition and language comprehension. For language acquisition, we review the different types of acquisition: first language acquisition, second language acquisition, sign language acquisition, and skill acquisition. Experimental techniques for detecting neurolinguistic acquisition are discussed, together with the experimental findings, including which brain regions are activated after acquisition; the findings show that different types of acquisition involve different regions of the brain. For language comprehension, both native-language comprehension and bilinguals' comprehension are considered. Comprehension involves different brain regions for different sentence or word comprehension, depending on semantics and syntax. The different fMRI/EEG analysis techniques (statistical or graph-theoretical) are also discussed in our review, as are tools for neurolinguistic computation.
... Most of the MEG studies examining how language is processed during the first year of life have focused on localizing language-related brain regions beyond primary auditory cortex. [59][60][61][62][63] Compared with other neuroimaging techniques, MEG is unique in that it can be used to study higher-level cognitive processes (e.g., language processes) throughout the whole brain in awake infants and young children. Using distributed source modeling (for example, minimum norm estimation [8]), Broca's area and the cerebellum have been found to be involved in passively encoding speech sounds, as well as in speech production, in addition to auditory cortex. ...
Article
Magnetoencephalography (MEG) is a noninvasive neuroimaging technique that measures the electromagnetic fields generated by the human brain. This article highlights the benefits that pediatric MEG has to offer to clinical practice and pediatric research, particularly for infants and young children; reviews the existing literature on adult MEG systems for pediatric use; briefly describes the few pediatric MEG systems currently extant; and draws attention to future directions of research, with focus on the clinical use of MEG for patients with drug-resistant epilepsy.
... Perisylvian asymmetries have been found prenatally in both neuroimaging (Kasprian et al., 2011) and postmortem studies (Wada, Clarke, & Hamm, 1975). In line with these early structural biases, speech processing has been found to be leftward asymmetrical as early as the age of 3 months in the superior temporal gyrus and angular gyrus (Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002), and at the age of 6 months in the IFG (Imada et al., 2006). However, longitudinal assessment of language dominance revealed that, at least from 5 to 11 years old, language lateralization increases as a function of age, especially in Broca's area and Wernicke's area (Szaflarski et al., 2006). ...
Article
Full-text available
Music processing and right hemispheric language lateralization share a common network in the right auditory cortex and its frontal connections. Given that the development of hemispheric language dominance takes place over several years, this study tested whether musicianship could increase the probability of observing right language dominance in left-handers. Using a classic fMRI language paradigm, results showed that atypical lateralization was more predominant in musicians (40%) than in nonmusicians (5%). Comparison of left-handers with typical left and atypical right lateralization revealed that: (a) atypical cases presented a thicker right pars triangularis and more gyrified left Heschl's gyrus; and (b) the right pars triangularis of atypical cases showed a stronger intrahemispheric functional connectivity with the right angular gyrus, but a weaker interhemispheric functional connectivity with part of the left Broca's area. Thus, musicianship is the first known factor related to a higher prevalence of atypical language dominance in healthy left-handed individuals. We suggest that differences in the frontal and temporal cortex might act as shared predisposing factors to both musicianship and atypical language lateralization.
... As development unfolds, parietal and frontal cortices, including LIFG, mature and connect to other regions, such as the pMSTG and ATL, becoming increasingly more engaged in semantics and in speech processing. MEG and fMRI studies have shown that initially only temporal and right frontal activations are found in response to speech in babies, while LIFG activity increases later, between 6 and 12 months of age (Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002;Imada et al., 2006). At 8 months, perceptual binding effects are also first observed (e.g., γ band responses to illusory Kanizsa figures, similar to those found in adults) (Csibra, Davis, Spratling, & Johnson, 2000). ...
Article
According to traditional linguistic theories, the construction of complex meanings relies firmly on syntactic structure-building operations. Recently, however, new models have been proposed in which semantics is viewed as being partly autonomous from syntax. In this paper, we discuss some of the developmental implications of syntax-based and autonomous models of semantics. We review event-related brain potential (ERP) studies on semantic processing in infants and toddlers, focusing on experiments reporting modulations of N400 amplitudes using visual or auditory stimuli and different temporal structures of trials. Our review suggests that infants can relate or integrate semantic information from temporally overlapping stimuli across modalities by 6 months of age. The ability to relate or integrate semantic information over time, within and across modalities, emerges by 9 months. The capacity to relate or integrate information from spoken words in sequences and sentences appears by 18 months. We also review behavioral and ERP studies showing that grammatical and syntactic processing skills develop only later, between 18 and 32 months. These results provide preliminary evidence for the availability of some semantic processes prior to the full developmental emergence of syntax: non-syntactic meaning-building operations are available to infants, albeit in restricted ways, months before the abstract machinery of grammar is in place. We discuss this hypothesis in light of research on early language acquisition and human brain development.
... The algorithm attempts to reproduce a voice signal by optimizing an articulatory synthesizer and, in this way, learns the articulatory characteristics of the speaker who trained the system. The idea is inspired by mirror neuron theory: there are neurons in the human brain, linked to the control of the human articulators, that are activated by listening to vocal stimuli [27,28]. This means that human perception of speech is related to its generation. ...
Article
Full-text available
In this paper we describe a novel algorithm, inspired by the mirror neuron discovery, to support automatic learning oriented to advanced man-machine interfaces. The algorithm introduces several points of innovation, based on complex similarity metrics that involve different characteristics of the entire learning process. In more detail, the proposed approach deals with a humanoid robot algorithm suited for automatic vocalization acquisition from a human tutor. The learned vocalizations can be used for multi-modal reproduction of speech, as the articulatory and acoustic parameters that compose the vocalization database can be used to synthesize unrestricted speech utterances and to reproduce the articulatory and facial movements of the humanoid talking face, automatically synchronized. The algorithm uses fuzzy articulatory rules, which describe transitions between phonemes derived from the International Phonetic Alphabet (IPA), to allow simpler adaptation to different languages, and genetic optimization of the membership degrees. Extensive experimental evaluation and analysis of the proposed algorithm on synthetic and real data sets confirms the benefits of our proposal. Indeed, experimental results show that the acquired vocalizations respect the basic phonetic rules of the Italian language, and subjective results show the effectiveness of multi-modal speech production with automatic synchronization between facial movements and speech emissions. The algorithm has been applied to a virtual speaking face but may also be used in mechanical vocalization systems.
Book
Heritage speakers are native speakers of a minority language they learn at home, but due to socio-political pressure from the majority language spoken in their community, their heritage language does not fully develop. In the last decade, the acquisition of heritage languages has become a central focus of study within linguistics and applied linguistics. This work centres on the grammatical development of the heritage language and the language learning trajectory of heritage speakers, synthesizing recent experimental research. The Acquisition of Heritage Languages offers a global perspective, with a wealth of examples from heritage languages around the world. Written in an accessible style, this authoritative and up-to-date text is essential reading for professionals, students, and researchers of all levels working in the fields of sociolinguistics, psycholinguistics, education, language policies and language teaching.
Article
Objective: Reorganization of the language network from typically left-lateralized frontotemporal regions to bilaterally distributed or right-lateralized networks occurs in anywhere from 25%-30% of patients with focal epilepsy. In patients who have been recently diagnosed with epilepsy, an important question remains as to whether it is the presence of seizures or the underlying epilepsy etiology that leads to atypical language representations. This question becomes even more interesting in pediatric samples, where the typical developmental processes of the language network may confer more variability and plasticity in the language network. We assessed a carefully selected cohort of children with recent-onset epilepsy to examine whether it is the effects of seizures or their underlying cause that leads to atypical language lateralization. Methods: We used functional magnetic resonance imaging (fMRI) to compare language laterality in children with recently diagnosed focal unaware epilepsy and age-matched controls. Age at epilepsy onset (age 4 to 6 years vs age 7 to 12 years) was also examined to determine if age at onset influenced laterality. Results: The majority of recent-onset patients and controls exhibited left-lateralized language. There was a significant interaction such that the relationship between epilepsy duration and laterality differed by age at onset. In children with onset after age 6, a longer duration of epilepsy was associated with less left-lateralized language dominance. In contrast, in children with onset between 4 and 6 years of age, a longer duration of epilepsy was not associated with less left language dominance. Significance: Our results demonstrate that although language remained largely left-lateralized in children recently diagnosed with epilepsy, the impact of seizure duration depended on age at onset, indicating that the timing of developmental and disease factors are important in determining language dominance.
Article
Full-text available
Human neonates can discriminate phonemes, but the neural mechanism underlying this ability is poorly understood. Here we show that the neonatal brain can learn to discriminate natural vowels from backward vowels, a contrast unlikely to have been learnt in the womb. Using functional near-infrared spectroscopy, we examined the neuroplastic changes caused by 5 h of postnatal exposure to random sequences of natural and reversed (backward) vowels (T1), and again 2 h later (T2). Neonates in the experimental group were trained with the same stimuli as those used at T1 and T2. Compared with controls, infants in the experimental group showed shorter haemodynamic response latencies for forward vs backward vowels at T1, maximally over the inferior frontal region. At T2, neural activity differentially increased, maximally over superior temporal regions and the left inferior parietal region. Neonates thus exhibit ultra-fast tuning to natural phonemes in the first hours after birth.
Article
Previous research has suggested that top-down sensory prediction facilitates, and may be necessary for, efficient transmission of information in the brain. Here we related infants’ vocabulary development to the top-down sensory prediction indexed by occipital cortex activation to the unexpected absence of a visual stimulus previously paired with an auditory stimulus. The magnitude of the neural response to the unexpected omission of a visual stimulus was assessed at the age of 6 months with functional near-infrared spectroscopy (fNIRS) and vocabulary scores were obtained using the MacArthur-Bates Communicative Development Inventory (MCDI) when infants reached the age of 12 months and 18 months, respectively. Results indicated significant positive correlations between this predictive neural signal at 6 months and MCDI expressive vocabulary scores at 12 and 18 months. These findings provide additional and robust support for the hypothesis that top-down prediction at the neural level plays a key role in infants’ language development.
Article
This paper quantifies the extent to which infants can perceive audio-visual congruence for speech information and assesses whether this ability changes with native-language exposure over time. A hierarchical Bayesian robust regression model of 92 separate effect sizes extracted from 24 studies indicates a moderate effect size in a positive direction (0.35, CI [0.21: 0.50]). This result suggests that infants possess a robust ability to detect audio-visual congruence for speech. Moderator analyses, moreover, suggest that infants’ audio-visual matching ability for speech emerges at an early point in the process of language acquisition and remains stable for both native and non-native speech throughout early development. A sensitivity analysis of the meta-analytic data, however, indicates that a moderate publication bias for significant results could shift the lower credible interval to include null effects. Based on these findings, we outline recommendations for new lines of enquiry and suggest ways to improve the replicability of results in future investigations.
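The meta-analysis above pools 92 effect sizes with a hierarchical Bayesian robust regression. A much simpler inverse-variance (fixed-effect) pooling, shown here with hypothetical per-study effect sizes and variances, illustrates the basic idea of meta-analytic aggregation; it is not the authors' model, which additionally accounts for between-study heterogeneity and nesting of effects within studies.

```python
import math

def pooled_effect(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate with a 95% CI.
    Each study is weighted by the reciprocal of its sampling variance."""
    weights = [1.0 / v for v in variances]
    est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate
    return est, (est - 1.96 * se, est + 1.96 * se)

# Hypothetical per-study effect sizes and sampling variances.
effects = [0.42, 0.28, 0.35, 0.31]
variances = [0.04, 0.02, 0.05, 0.03]
est, (lo, hi) = pooled_effect(effects, variances)
```

Precise studies (small variance) dominate the pooled estimate, which is why the hierarchical robust model in the paper is preferable when a few large studies might otherwise swamp the rest.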
Preprint
Full-text available
Understanding the trends and predictors of attrition rate, or the proportion of collected data that is excluded from the final analyses, is important for accurate research planning, assessing data integrity, and ensuring generalizability. In this pre-registered meta-analysis, we reviewed 182 publications in infant (0-24 months) functional near-infrared spectroscopy (fNIRS) research published from 1998 to April 9, 2020 and investigated the trends and predictors of attrition. The average attrition rate was 34.23% among 272 experiments across all 182 publications. Among a subset of 136 experiments which reported the specific reasons for subject exclusion, 21.50% of the attrition was infant-driven while 14.21% was signal-driven. Subject characteristics (e.g., age) and study design (e.g., fNIRS cap configuration, block/trial design, and stimulus type) predicted the total and subject-driven attrition rates, suggesting that modifying the recruitment pool or the study design can meaningfully reduce the attrition rate in infant fNIRS research. Based on the findings, we established guidelines on reporting the attrition rate for scientific transparency and made recommendations to minimize attrition rates. We also launched an attrition rate calculator ( LINK ) to aid with research planning. This research can facilitate developmental cognitive neuroscientists in their quest toward increasingly rigorous and representative research.
Highlights:
- Average attrition rate in infant fNIRS research is 34.23%
- 21.50% of the attrition is infant-driven (e.g., inattentiveness) while 14.21% is signal-driven (e.g., poor optical contact)
- Subject characteristics (e.g., age) and study design (e.g., fNIRS cap configuration, block/trial design, and stimulus type) predict the total and infant-driven attrition rates
- Modifying the recruitment pool or the study design can meaningfully reduce the attrition rate in infant fNIRS research
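The attrition rate discussed in this abstract is simply the proportion of collected data excluded from the final analyses, optionally broken down by exclusion reason. A minimal sketch with hypothetical counts (the function name and example figures are illustrative, not the authors' calculator):

```python
def attrition_rate(collected, excluded_by_reason):
    """Overall attrition plus a per-reason breakdown,
    both expressed as percentages of the collected sample."""
    total_excluded = sum(excluded_by_reason.values())
    overall = 100.0 * total_excluded / collected
    by_reason = {r: 100.0 * n / collected for r, n in excluded_by_reason.items()}
    return overall, by_reason

# Hypothetical study: 50 infants tested, exclusions tallied by reason.
overall, by_reason = attrition_rate(50, {"infant-driven": 11, "signal-driven": 6})
```

Reporting the per-reason breakdown alongside the overall rate, as the authors recommend, lets readers judge whether exclusions reflect the participants (e.g., fussiness) or the recording setup (e.g., poor optical contact).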
Article
Full-text available
ANDRADE, I. R.; FRANÇA, A. I.; SAMPAIO, T. O. M. Dinâmicas de interação nature-nurture: do imprinting à reciclagem neuronal. ReVEL, vol. 16, n. 31, 2018. [www.revel.inf.br] ABSTRACT: From studies on the cognitive development of different species, we propose a continuum classifying different cognitions by their mode of acquisition. It starts with the most spontaneous ones, which are ready from birth. The continuum then progresses through maturational cognitions, which are implemented within restricted, species-specific time windows. Finally, the continuum features optional cognitions used to cope with tasks acquired through explicit instruction or imitation. The cognitive ability to read, developed by the human brain, is examined in greater depth in light of neoteny, the lengthening of infancy, and the neuronal recycling hypothesis, which predicts the co-optation of already established cognitions for the execution of new tasks created by society, such as reading.
Article
Full-text available
It is well established that higher cognitive ability is associated with larger brain size. However, individual variation in intelligence exists beyond what brain size explains, and recent studies have shown that a simple unifactorial view of the neurobiology underpinning cognitive ability is probably unrealistic. Educational attainment (EA) is often used as a proxy for cognitive ability since it is easily measured, resulting in large sample sizes and, consequently, sufficient statistical power to detect small associations. This study investigates the association between three global (total surface area (TSA), intra-cranial volume (ICV) and average cortical thickness) and 34 regional cortical measures with educational attainment using a polygenic scoring (PGS) approach. Analyses were conducted on two independent target samples of young twin adults with neuroimaging data, from Australia (N = 1097) and the USA (N = 723), and found that higher EA-PGS were significantly associated with larger global brain size measures, ICV and TSA (R² = 0.006 and 0.016 respectively, p < 0.001) but not average thickness. At the regional level, we identified seven cortical regions, in the frontal and temporal lobes, that showed variation in surface area and average cortical thickness over and above the global effect. These regions have been robustly implicated in language, memory, visual recognition and cognitive processing. Additionally, we demonstrate that these identified brain regions partly mediate the association between EA-PGS and cognitive test performance. Altogether, these findings advance our understanding of the neurobiology that underpins educational attainment and cognitive ability, providing focus points for future research.
Article
Purpose: The study sought to determine whether the onset of canonical vocalizations in children with cochlear implants (CIs) is related to speech perception skills and spoken vocabulary size at 24 months postactivation.
Method: The vocal development of 13 young CI recipients (implanted by their third birthdays; mean age at activation = 20.62 months, SD = 8.92 months) was examined at 3-month intervals during the first 2 years of CI use. All children were enrolled in auditory-oral intervention programs, and their families used spoken English only. To determine the onset of canonical syllables, the first 50 utterances from 20-min adult-child interactions were analyzed during each session. Onset was determined when at least 20% of utterances included canonical syllables. As children's outcomes, we examined their Lexical Neighborhood Test scores and vocabulary size at 24 months postactivation.
Results: Pearson correlation analysis showed that the onset timing of canonical syllables is significantly correlated with phonemic recognition skills and spoken vocabulary size at 24 months postactivation. Regression analyses also indicated that the onset timing of canonical syllables predicted phonemic recognition skills and spoken vocabulary size at 24 months postactivation.
Conclusion: Monitoring vocal advancement during the earliest periods following cochlear implantation could be valuable as an early indicator of auditory-driven language development in young children with CIs. It remains to be studied which factors improve vocal development for young CI recipients.
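The 20% onset criterion described in this abstract can be computed per session from per-utterance codings. A minimal sketch, assuming each session is already coded as a list of booleans indicating whether each utterance contained a canonical syllable (the function name and toy data are hypothetical):

```python
def canonical_onset(sessions, threshold=0.2):
    """Return the index of the first session in which the proportion of
    utterances containing canonical syllables reaches `threshold`, else None.
    Each session is a list of booleans (True = canonical syllable present)."""
    for i, utterances in enumerate(sessions):
        if utterances and sum(utterances) / len(utterances) >= threshold:
            return i
    return None

# Hypothetical sessions of 10 coded utterances each.
sessions = [
    [True] * 1 + [False] * 9,   # 10% canonical: below threshold
    [True] * 3 + [False] * 7,   # 30% canonical: meets the 20% criterion
]
onset = canonical_onset(sessions)
```

With sessions spaced at 3-month intervals, the returned index maps directly onto the child's age at onset, which is the predictor the study correlates with later outcomes.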
Chapter
Advances in brain-imaging techniques have enabled cognitive neuroscientists to investigate the emergence of cognition in relation to brain development. This line of work has led to the advancement of three views concerning the functional development of the human brain during infancy and childhood: (1) the maturational view, (2) the interactive specialization view, and (3) the skill-learning view. In this article, findings from imaging, neuropsychological, and behavioral studies on the development of face processing, working memory, long-term memory, and language are described in reference to these views. From these examples, it becomes clear that all three theories of functional development are valuable for our understanding of brain–behavior relations.
Chapter
This chapter explores brain development, potential insults to normal development, and behavioral correlates. When the brain is ready to develop in a certain area but the needed stimulus is not present, normal development does not happen. We examine emerging studies that have demonstrated the impact of the external environment (experienced as stress) on genes, brain development, and behavior. Factors like maternal inflammation, poverty, poor nutrition, racism, toxic stress, and substance use have been shown to negatively impact brain development. If every system that serves youth (the pediatrician's office, schools, juvenile justice, families, health care, etc.) considered factors that support healthy brain development, those systems could powerfully change and shape behavior.
Article
Full-text available
Cerebral activation was measured with positron emission tomography in ten human volunteers. The primary auditory cortex showed increased activity in response to noise bursts, whereas acoustically matched speech syllables activated secondary auditory cortices bilaterally. Instructions to make judgments about different attributes of the same speech signal resulted in activation of specific lateralized neural systems. Discrimination of phonetic structure led to increased activity in part of Broca's area of the left hemisphere, suggesting a role for articulatory recoding in phonetic perception. Processing changes in pitch produced activation of the right prefrontal cortex, consistent with the importance of right-hemisphere mechanisms in pitch perception.
Article
Full-text available
Differences among languages offer a way of studying the process of infant adaptation from broad initial capacities to language-specific phonetic production. We designed analyses of the distribution of consonantal place and manner categories in French, English, Japanese, and Swedish to determine (1) whether systematic differences can be found in the babbling and first words of infants from different language backgrounds, and, if so, (2) whether these differences are related to the phonetic structure of the language spoken in the environment. Five infants from each linguistic environment were recorded under similar conditions from babbling only to the production of 25 words in a session. Although all of the infants generally made greater use of labials, dentals, and stops than of other classes of sounds, a clear phonetic selection could already be discerned in babbling, leading to statistically significant differences among the groups. This selection can be seen to arise from phonetic patterns of the ambient language. Comparison of the babbling and infant word repertoires reveals differences reflecting the motoric consequences of sequencing constraints.
Article
Full-text available
To investigate mechanisms of audio-vocal interactions in the human brain, we studied the effect of speech output on modulation of neuronal activity in the auditory cortex. The modulation was assessed indirectly by measuring changes in cerebral blood flow (CBF) during unvoiced speech (whispering). Using positron emission tomography (PET), CBF was measured in eight volunteers as they uttered syllables at each of seven rates (30, 50, 70, 90, 110, 130 or 150/min) during each of the seven 60-s PET scans. Low-intensity white noise was used throughout scanning to mask auditory input contingent on the whispering. We found that, as a function of the increasing syllable rate, CBF increased in the left primary face area, the upper pons, the left planum temporale and the left posterior perisylvian cortex. The latter two regions contain secondary auditory cortex and previously have been implicated in the processing of speech sounds. We conclude that, in the absence of speech-contingent auditory input, the modulation of CBF in the auditory cortex is mediated by motor-to-sensory discharges. As such, it extends our previous findings of oculomotor corollary discharges to the audio-vocal domain.
Article
Full-text available
In studies of pitch processing, a fundamental question is whether shared neural mechanisms at higher cortical levels are engaged for pitch perception of linguistic and nonlinguistic auditory stimuli. Positron emission tomography (PET) was used in a crosslinguistic study to compare pitch processing in native speakers of two tone languages (that is, languages in which variations in pitch patterns are used to distinguish lexical meaning), Chinese and Thai, with those of English, a nontone language. Five subjects from each language group were scanned under three active tasks (tone, pitch, and consonant) that required focused-attention, speeded-response, auditory discrimination judgments, and one passive baseline as silence. Subjects were instructed to judge pitch patterns of Thai lexical tones in the tone condition; pitch patterns of nonspeech stimuli in the pitch condition; syllable-initial consonants in the consonant condition. Analysis was carried out by paired-image subtraction. When comparing the tone to the pitch task, only the Thai group showed significant activation in the left frontal operculum. Activation of the left frontal operculum in the Thai group suggests that phonological processing of suprasegmental as well as segmental units occurs in the vicinity of Broca's area. Baseline subtractions showed significant activation in the anterior insular region for the English and Chinese groups, but not Thai, providing further support for the existence of possibly two parallel, separate pathways projecting from the temporo-parietal to the frontal language area. More generally, these differential patterns of brain activation across language groups and tasks support the view that pitch patterns are processed at higher cortical levels in a top-down manner according to their linguistic function in a particular language.
Article
Full-text available
Functional magnetic resonance imaging was employed before and after six native English speakers completed lexical tone training as part of a program to learn Mandarin as a second language. Language-related areas including Broca's area, Wernicke's area, auditory cortex, and supplementary motor regions were active in all subjects before and after training and did not vary in average location. Across all subjects, improvements in performance were associated with an increase in the spatial extent of activation in left superior temporal gyrus (Brodmann's area 22, putative Wernicke's area), the emergence of activity in adjacent Brodmann's area 42, and the emergence of activity in right inferior frontal gyrus (Brodmann's area 44), a homologue of putative Broca's area. These findings demonstrate a form of enrichment plasticity in which the early cortical effects of learning a tone-based second language involve both expansion of preexisting language-related areas and recruitment of additional cortical regions specialized for functions similar to the new language functions.
Article
Full-text available
This chapter focuses on one of the first steps in comprehending spoken language: How do listeners extract the most fundamental linguistic elements (consonants and vowels, or the distinctive features which compose them) from the acoustic signal? We begin by describing three major theoretical perspectives on the perception of speech. Then we review several lines of research that are relevant to distinguishing these perspectives. The research topics surveyed include categorical perception, phonetic context effects, learning of speech and related nonspeech categories, and the relation between speech perception and production. Finally, we describe challenges facing each of the major theoretical perspectives on speech perception.
Article
Full-text available
Numerous findings suggest that non-native speech perception undergoes dramatic changes before the infant's first birthday. Yet the nature and cause of these changes remain uncertain. We evaluated the predictions of several theoretical accounts of developmental change in infants' perception of non-native consonant contrasts. Experiment 1 assessed English-learning infants' discrimination of three isiZulu distinctions that American adults had categorized and discriminated quite differently, consistent with the Perceptual Assimilation Model (PAM: Best, 1995; Best et al., 1988). All involved a distinction employing a single articulatory organ, in this case the larynx. Consistent with all theoretical accounts, 6-8 month olds discriminated all contrasts. However, 10-12 month olds performed more poorly on each, consistent with the Articulatory-Organ-matching hypothesis (AO) derived from PAM and Articulatory Phonology (Studdert-Kennedy & Goldstein, 2003), specifically that older infants should show a decline for non-native distinctions involving a single articulatory organ. However, the results may also be open to other interpretations. The converse AO hypothesis, that non-native between-organ distinctions will remain highly discriminable to older infants, was tested in Experiment 2, using a non-native Tigrinya distinction involving lips versus tongue tip. Both ages discriminated this between-organ contrast well, further supporting the AO hypothesis. Implications for theoretical accounts of infant speech perception are discussed.
Article
Full-text available
A category of stimuli of great importance for primates, humans in particular, is that formed by actions done by other individuals. If we want to survive, we must understand the actions of others. Furthermore, without action understanding, social organization is impossible. In the case of humans, there is another faculty that depends on the observation of others' actions: imitation learning. Unlike most species, we are able to learn by imitation, and this faculty is at the basis of human culture. In this review we present data on a neurophysiological mechanism--the mirror-neuron mechanism--that appears to play a fundamental role in both action understanding and imitation. We describe first the functional properties of mirror neurons in monkeys. We review next the characteristics of the mirror-neuron system in humans. We stress, in particular, those properties specific to the human mirror-neuron system that might explain the human capacity to learn by imitation. We conclude by discussing the relationship between the mirror-neuron system and language.
Article
Full-text available
The ability to discriminate phonetically similar speech sounds is evident quite early in development. However, inexperienced word learners do not always use this information in processing word meanings [Stager & Werker (1997). Nature, 388, 381-382]. The present study used event-related potentials (ERPs) to examine developmental changes from 14 to 20 months in brain activity important in processing phonetic detail in the context of meaningful words. ERPs were compared to three types of words: words whose meanings were known by the child (e.g., "bear"), nonsense words that differed by an initial phoneme (e.g., "gare"), and nonsense words that differed from the known words by more than one phoneme (e.g., "kobe"). These results supported the behavioral findings suggesting that inexperienced word learners do not use information about phonetic detail when processing word meanings. For the 14-month-olds, ERPs to known words (e.g., "bear") differed from ERPs to phonetically dissimilar nonsense words (e.g., "kobe"), but did not differ from ERPs to phonetically similar nonsense words (e.g., "gare"), suggesting that known words and similar mispronunciations were processed as the same word. In contrast, for experienced word learners (i.e., 20-month-olds), ERPs to known words (e.g., "bear") differed from those to both types of nonsense words ("gare" and "kobe"). Changes in the lateral distribution of ERP differences to known and unknown (nonce) words between 14 and 20 months replicated previous findings. The findings suggested that vocabulary development is an important factor in the organization of neural systems linked to processing phonetic detail within the context of word comprehension.
Article
Full-text available
Linguistic experience alters an individual's perception of speech. We here provide evidence of the effects of language experience at the neural level from two magnetoencephalography (MEG) studies that compare adult American and Japanese listeners' phonetic processing. The experimental stimuli were American English /ra/ and /la/ syllables, phonemic in English but not in Japanese. In Experiment 1, the control stimuli were /ba/ and /wa/ syllables, phonemic in both languages; in Experiment 2, they were non-speech replicas of /ra/ and /la/. The behavioral and neuromagnetic results showed that Japanese listeners were less sensitive to the phonemic /r-l/ difference than American listeners. Furthermore, processing non-native speech sounds recruited significantly greater brain resources in both hemispheres and required a significantly longer period of brain activation in two regions, the superior temporal area and the inferior parietal area. The control stimuli showed no significant differences except that the duration effect in the superior temporal cortex also applied to the non-speech replicas. We argue that early exposure to a particular language produces a "neural commitment" to the acoustic properties of that language and that this neural commitment interferes with foreign language processing, making it less efficient.
Article
Full-text available
For a long time the cortical systems for language and actions were believed to be independent modules. However, as these systems are reciprocally connected with each other, information about language and actions might interact in distributed neuronal assemblies. A critical case is that of action words that are semantically related to different parts of the body (for example, 'lick', 'pick' and 'kick'): does the comprehension of these words specifically, rapidly and automatically activate the motor system in a somatotopic manner, and does their comprehension rely on activity in the action system?
Article
Previous work in which we compared English infants, English adults, and Hindi adults on their ability to discriminate two pairs of Hindi (non-English) speech contrasts has indicated that infants discriminate speech sounds according to phonetic category without prior specific language experience (Werker, Gilbert, Humphrey, & Tees, 1981), whereas adults and children as young as age 4 (Werker & Tees, in press), may lose this ability as a function of age and or linguistic experience. The present work was designed to (a) determine the generalizability of such a decline by comparing adult English, adult Salish, and English infant subjects on their perception of a new non-English (Salish) speech contrast, and (b) delineate the time course of the developmental decline in this ability. The results of these experiments replicate our original findings by showing that infants can discriminate non-native speech contrasts without relevant experience, and that there is a decline in this ability during ontogeny. Furthermore, data from both cross-sectional and longitudinal studies shows that this decline occurs within the first year of life, and that it is a function of specific language experience.
Article
In a series of four experiments, subjects studied a list of words and then tried to complete fragments of these words as well as fragments of nonstudied words. They also indicated on a rating scale whether the word specified by each fragment was a list word or not, regardless of whether they were able to complete the fragment and identify the word. When ratings for only those fragments that could not be completed were considered, a higher mean rating was obtained for list words than for lure words. This was the case with both two-letter and four-letter fragments, with both 10 s and 20 s allowed for working on each fragment, and with both Turkish and English words. Thus, a curious finding, a feeling-of-knowing for the inclusion of certain words in a previously studied list even when such words could not be identified, was demonstrated.
Article
The everyday auditory environment consists of multiple simultaneously active sources with overlapping temporal and spectral acoustic properties. Despite the seemingly chaotic composite signal impinging on our ears, the resulting perception is of an orderly ‘auditory scene’ that is organized according to sources and auditory events, allowing us to select messages easily, recognize familiar sound patterns, and distinguish deviant or novel ones. Recent data suggest that these perceptual achievements are mainly based on processes of a cognitive nature (‘sensory intelligence’) in the auditory cortex. Even higher cognitive processes than previously thought, such as those that organize the auditory input, extract the common invariant patterns shared by a number of acoustically varying sounds, or anticipate the auditory events of the immediate future, occur at the level of sensory cortex (even when attention is not directed towards the sensory input).
Article
Infants 18 to 20 weeks old recognize the correspondence between auditorially and visually presented speech sounds, and the spectral information contained in the sounds is critical to the detection of these correspondences. Some infants imitated the sounds presented during the experiment. Both the ability to detect auditory-visual correspondences and the tendency to imitate may reflect the infant's knowledge of the relationship between audition and articulation.
Article
Infants' development of speech begins with a language-universal pattern of production that eventually becomes language specific. One mechanism contributing to this change is vocal imitation. The present study was undertaken to examine developmental change in infants' vocalizations in response to adults' vowels at 12, 16, and 20 weeks of age and test for vocal imitation. Two methodological aspects of the experiment are noteworthy: (a) three different vowel stimuli (/a/, /i/, and /u/) were videotaped and presented to infants by machine so that the adult model could not artifactually influence infant utterances, and (b) infants' vocalizations were analyzed both physically, using computerized spectrographic techniques, and perceptually by trained phoneticians who transcribed the utterances. The spectrographic analyses revealed a developmental change in the production of vowels. Infants' vowel categories become more separated in vowel space from 12 to 20 weeks of age. Moreover, vocal imitation was documented, infants listening to a particular vowel produced vocalizations resembling that vowel. A hypothesis is advanced extending Kuhl's native language magnet (NLM) model to encompass infants' speech production. It is hypothesized that infants listening to ambient language store perceptually derived representations of the speech sounds they hear which in turn serve as targets for the production of speech utterances. NLM unifies previous findings on the effects of ambient language experience on infants' speech perception and the findings reported here that short-term laboratory experience with speech is sufficient to influence infants' speech production.
Article
The locations of active brain areas can be estimated from the magnetic field the neural current sources produce. In this work we study a visualization method of magnetoencephalographic data that is based on minimum ℓ1-norm estimates. The method can represent several local or distributed sources and does not need explicit a priori information. We evaluated the performance of the method using simulation studies. In a situation resembling typical magnetoencephalographic measurement, the mean estimated source strength exceeded baseline level up to 2 cm from the simulated point-like source. The method can also visualize several sources, activated simultaneously or in a sequence, which we demonstrated by analyzing magnetic responses associated with sensory stimulation and a picture naming task.
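The minimum ℓ1-norm idea in this abstract can be sketched as a small linear program: minimize the summed absolute source strength subject to exactly explaining the measured field. This is only a toy illustration under assumed numbers (the lead field, sensor values, and function name are invented, not from the study):

```python
import numpy as np
from scipy.optimize import linprog

def min_l1_estimate(leadfield, measurement):
    """Minimum l1-norm source estimate: minimize ||x||_1 subject to
    leadfield @ x = measurement.  Splitting x = xp - xn with
    xp, xn >= 0 turns the problem into a standard linear program."""
    m, n = leadfield.shape
    c = np.ones(2 * n)                         # objective: sum(xp + xn) = ||x||_1
    A_eq = np.hstack([leadfield, -leadfield])  # leadfield @ (xp - xn) = measurement
    res = linprog(c, A_eq=A_eq, b_eq=measurement, bounds=(0, None))
    xp, xn = res.x[:n], res.x[n:]
    return xp - xn

# Toy lead field: 2 sensors, 3 candidate sources (hypothetical numbers).
L = np.array([[1.0, 0.3, 0.0],
              [0.0, 0.9, 1.0]])
b = np.array([0.3, 0.9])
x = min_l1_estimate(L, b)
print(np.round(x, 3))  # a sparse source pattern explaining the field
```

The ℓ1 objective is what favors the sparse, focal source patterns that make such estimates useful for visualization; an ℓ2 (minimum-norm) objective would instead smear the current over many candidate locations.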
Article
There are two widely divergent theories about the relation of speech to language. The more conventional view holds that the elements of speech are sounds that rely for their production and perception on two wholly separate processes, neither of which is distinctly linguistic. Accordingly, the primary motor and perceptual representations are inappropriate for linguistic purposes until a cognitive process of some sort has connected them to language and to each other. The less conventional theory takes the speech elements to be articulatory gestures that are the primary objects of both production and perception. Those gestures form a natural class that serves a linguistic function and no other. Therefore, their representations are immediately linguistic, requiring no cognitive intervention to make them appropriate for use by the other components of the language system. The unconventional view provides the more plausible answers to three important questions: (1) How was the necessary parity between speaker and listener established in evolution, and how maintained? (2) How does speech meet the special requirements that underlie its ability, unique among natural communication systems, to encode an indefinitely large number of meanings? (3) What biological properties of speech make it easier than the reading and writing of its alphabetic transcription?
Article
We recorded magnetic brain activity from healthy human newborns when they heard frequency changes in an otherwise repetitive sound stream. We were able to record the magnetic counterpart of the mismatch negativity (MMN) previously described only with electric recordings in infants. The results show that these recordings are possible, although still challenging due to the small head size and head movements. The modelling of the neural sources underlying the recorded responses suggests cortical sources in the temporal lobes.
Article
Previous research has indicated that 70-85% of women and girls show a bias to hold infants, or dolls, to the left side of their body. This bias is not matched in males (e.g. deChateau, Holmberg & Winberg, 1978; Todd, 1995). This study tests an explanation of cradling preferences in terms of hemispheric specialization for the perception of facial emotional expression. Thirty-two right-handed participants were given a behavioural test of lateralization and a cradling task. Females, but not males, who cradled a doll on the left side were found to have significantly higher laterality quotients than right cradlers. Results indicate that women cradle on the side of the body that is contralateral to the hemisphere dominant for face and emotion processing and suggest a possible explanation of gender differences in the incidence of cradling.
Article
Magnetic brain responses to speech sounds were measured in 10 healthy neonates. The stimulation consisted of a frequent vowel sound [a:] with a steady pitch contour, which was occasionally replaced by the vowel [i:] with a steady pitch, or the vowel [a:] with a rising pitch, manifesting a change of intonation. The magnetic mismatch-negativity response (MMNm) was obtained and successfully modelled to the speech sound quality change in all infants and to the intonation change in 6 infants. The present results indicate that auditory-cortex speech-sound discrimination may well be studied with magnetic recordings as early as in newborn infants.
Article
Magnetoencephalography (MEG) detects the brain's magnetic fields as generated by neuronal electric currents arising from synaptic ion flow. It is noninvasive, has excellent temporal resolution, and it can localize neuronal activity with good precision. For these reasons, many scientists interested in the localization of brain functions have turned to MEG. The technique, however, is not without its drawbacks. Those reluctant to employ it cite its relative awkwardness among pediatric populations because MEG requires subjects to be fairly still during experiments. Due to these methodological challenges, infant MEG studies are not commonly pursued. In the present study, MEG was employed to study auditory discrimination in infants. We had two goals: first, to determine whether reliable results could be obtained from infants despite their movements; and second, to improve MEG data analysis methods. To get more reliable results from infants we employed novel hardware (real-time head-position tracking system) and software (signal space separation method, SSS) solutions to better deal with noise and movement. With these solutions, the location and orientation of the head can be tracked in real time and we were able to reduce noise and artifacts originating outside the helmet significantly. In the present study, these new methods were used to study the biomagnetic equivalents of event-related potentials (ERPs) in response to duration changes in harmonic tones in sleeping, healthy, full-term newborns. Our findings indicate that with the use of these new analysis routines, MEG will prove to be a very useful and more accessible experimental technique among pediatric populations.
Article
Data on typically developing children suggest a link between social interaction and language learning, a finding of interest both to theories of language and theories of autism. In this study, we examined social and linguistic processing of speech in preschool children with autism spectrum disorder (ASD) and typically developing chronologically matched (TDCA) and mental age matched (TDMA) children. The social measure was an auditory preference test that pitted 'motherese' speech samples against non-speech analogs of the same signals. The linguistic measure was phonetic discrimination assessed with mismatch negativity (MMN), an event-related potential (ERP). As a group, children with ASD differed from controls by: (a) demonstrating a preference for the non-speech analog signals, and (b) failing to show a significant MMN in response to a syllable change. When ASD children were divided into subgroups based on auditory preference, and the ERP data reanalyzed, ASD children who preferred non-speech still failed to show an MMN, whereas ASD children who preferred motherese did not differ from the controls. The data support the hypothesis of an association between social and linguistic processing in children with ASD.
Article
Behavioral data establish a dramatic change in infants' phonetic perception between 6 and 12 months of age. Foreign-language phonetic discrimination significantly declines with increasing age. Using a longitudinal design, we examined the electrophysiological responses of 7- and 11-month-old American infants to native and non-native consonant contrasts. Analyses of the event-related potentials (ERP) of the group data at 7 and at 11 months of age demonstrated that infants' discriminatory ERP responses to the non-native contrast are present at 7 months of age but disappear by 11 months of age, consistent with the behavioral data reported in the literature. However, when the same infants were divided into subgroups based on individual ERP components, we found evidence that the infant brain remains sensitive to the non-native contrast at 11 months of age, showing differences in either the P150-250 or the N250-550 time window, depending upon the subgroup. Moreover, we observed an increase in infants' responsiveness to native language consonant contrasts over time. We describe distinct neural patterns in two groups of infants and suggest that their developmental differences may have an impact on language development.
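The discriminatory ERP responses described here are conventionally estimated as a deviant-minus-standard difference wave averaged over trials. A minimal sketch with synthetic data (all numbers, names, and the simulated deflection are illustrative assumptions, not the study's pipeline):

```python
import numpy as np

def difference_wave(epochs, labels):
    """Deviant-minus-standard average: the usual estimate of a
    discriminative (MMN-like) response from single-trial epochs.
    epochs: (n_trials, n_samples) array; labels: 'S' or 'D' per trial."""
    labels = np.asarray(labels)
    standard = epochs[labels == 'S'].mean(axis=0)
    deviant = epochs[labels == 'D'].mean(axis=0)
    return deviant - standard

# Synthetic demo: deviant trials carry an extra negative deflection
# around samples 35-45 (hypothetical latency window).
rng = np.random.default_rng(0)
n_trials, n_samples = 200, 100
labels = np.array(['D' if i % 8 == 0 else 'S' for i in range(n_trials)])
epochs = rng.normal(0.0, 1.0, (n_trials, n_samples))
epochs[labels == 'D', 35:45] -= 3.0
mmn = difference_wave(epochs, labels)
print(int(np.argmin(mmn)))  # index near the simulated deflection
```

Averaging across trials suppresses the trial-by-trial noise, so the negative deflection survives in the difference wave while the background activity largely cancels.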
Article
We report infant auditory event-related potentials to native and foreign contrasts. Foreign contrasts are discriminated at 11 months of age, showing significant differences between the standard and deviant over the positive (P150-250), or over the negative (N250-550) part of the waveform. The amplitudes of these deflections have different amplitude scalp distributions. Infants were followed up longitudinally at 18, 22, 25, 27 and 30 months for word production. The infant speech discriminatory P150-250 and N250-550 are different components with different implications for later language development.
Article
This study investigates by means of the event-related brain potential whether mechanisms of lexical priming and semantic integration are already developed in 14-month-olds. While looking at coloured pictures of known objects children were presented with basic-level words that were either congruous or incongruous to the pictures. The event-related potential of 14-month-olds revealed an early negativity in the lateral frontal brain region for congruous words, and a later N400-like negativity for incongruous words. These results indicate that both lexical priming and semantic integration are already present as early as 14 months.
Article
The mismatch negativity (MMN) response to auditory stimuli has been successfully recorded in newborns thus demonstrating the discriminative cognitive ability. The aim of our study was to determine whether and when such an MMN response could be detected in the human fetus. The recordings of weak magnetic fields from the fetal brain were performed with the 151 channel MEG system called SARA (SQUID Array for Reproductive Assessment). Two tone bursts were presented in a sequence of a standard complex tone of 500 Hz intermixed with a deviant complex tone of 750 Hz in 12% of the stimuli. Sound intensity delivered over the maternal abdomen was 110 dB. The interstimulus interval (ISI) varied between 500 ms and 1100 ms. Fetal response, corresponding to sound frequency change detection, was calculated from the records where responses to standard and deviant tones were observed. A successful response was found in 60% of 25 fetal recordings. The MMN response with an average latency of 321 ms was observed in 48% of the fetal data. In 12% of the fetal data, a late component, referred to as the late discriminative negativity (LDN) response, was detected with an average latency of 458 ms. The same paradigm was applied in 5 newborns after birth. The capability for sound discrimination is a prerequisite for normal speech development. The investigation of sound discrimination and related cortical activity of the fetus can help to identify and determine the nature of deficits caused by central processes in the auditory system at very early stages.
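Oddball paradigms like the one above intersperse rare deviants (here 12%) among frequent standards. A minimal sketch of such a sequence generator; the function name, the `min_sep` spacing rule, and the trial counts are illustrative assumptions rather than the study's actual stimulus schedule:

```python
import random

def oddball_sequence(n_trials, deviant_prob=0.12, min_sep=2, seed=0):
    """Generate an oddball trial sequence: 'S' standards with occasional
    'D' deviants at roughly `deviant_prob`, requiring at least `min_sep`
    standards between consecutive deviants."""
    rng = random.Random(seed)
    seq, since_dev = [], min_sep
    for _ in range(n_trials):
        if since_dev >= min_sep and rng.random() < deviant_prob:
            seq.append('D')
            since_dev = 0
        else:
            seq.append('S')
            since_dev += 1
    return seq

seq = oddball_sequence(500)
print(seq.count('D'), len(seq))  # rare deviants among frequent standards
```

Spacing deviants apart is a common precaution so that each deviant is preceded by enough standards to re-establish the regularity it violates.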
Developmental phonetic perception: native language magnet theory expanded (NLM-e)
  • PK Kuhl
  • BT Conboy
  • S Coffey-Corina
  • D Padden
  • M Rivera-Gaxiola
  • T Nelson
Kuhl PK, Conboy BT, Coffey-Corina S, Padden D, Rivera-Gaxiola M, Nelson T. Developmental phonetic perception: native language magnet theory expanded (NLM-e). Philos Trans R Soc (in press).
Lateralization of phonetic and pitch discrimination in speech processing
  • RJ Zatorre
  • AC Evans
  • E Meyer
  • A Gjedde
Zatorre RJ, Evans AC, Meyer E, Gjedde A. Lateralization of phonetic and pitch discrimination in speech processing. Science 1992; 256:846–849.
Modulation of cerebral blood flow in the human auditory cortex during speech: role of motor-to-sensory discharges
  • T Paus
  • DW Perry
  • RJ Zatorre
  • KJ Worsley
  • AC Evans
Paus T, Perry DW, Zatorre RJ, Worsley KJ, Evans AC. Modulation of cerebral blood flow in the human auditory cortex during speech: role of motor-to-sensory discharges. Eur J Neurosci 1996; 8:2236–2246.
Adaptation to language: evidence from babbling and first words in four languages
  • de Boysson-Bardies
Auditory magnetic responses of healthy newborns
  • Huotilainen