Developmental Neurocognition: Speech and Face Processing in the First Year of Life
Abstract
This volume contains the proceedings of a NATO Advanced Research Workshop (ARW) on the topic of "Changes in Speech and Face Processing in Infancy: A glimpse at Developmental Mechanisms of Cognition", which was held in Carry-le-Rouet (France) at the Vacanciel "La Calanque", from June 29 to July 3, 1992. For many years, developmental researchers have been systematically exploring what is concealed by the blooming, buzzing confusion (as William James described the infant's world). Much research has been carried out on the mechanisms by which organisms recognize and relate to their conspecifics, in particular with respect to language acquisition and face recognition. Given this background, it seems worthwhile to compare not only the conceptual advances made in these two domains, but also the methodological difficulties faced in each of them. In both domains, there is evidence of sophisticated abilities right from birth. Similarly, researchers in these domains have focused on whether the mechanisms underlying these early competences are modality-specific, object-specific, or otherwise.
Chapters (29)
There are at least three levels of experience involvement in the specification of connections in the nervous system: connections may be experience-independent, with no detectable influence of experience on their formation; experience-expectant, where synapses are generated in advance of the experience that selects a patterned subset of them for survival; or experience-dependent, the apparent form subserving adult learning and memory, where experience appears to trigger the formation of new connections. Molecular substrates of these three types of synapse formation are probably shared to a great extent, although, when synapses are triggered by experience, there must be regulators of any gene expression involved, structural proteins expressed that can bring about synapse formation, and, we argue, a local marker that either initiates synaptogenesis or otherwise mediates the interval between nerve cell activity and the subsequent formation of a synapse. We discuss several candidates for these roles that we have studied.
The expansion of the cerebral hemispheres distinguishes higher mammals and provides the morphological substrate for the rich behavioral repertoire found in primates including man. In the present paper we aim to examine the factors which control the development of the cerebral cortex.
The human primary visual cortex quadruples in volume between 28 weeks of gestation and birth, and again by some 4 months of age, after which it remains stable through adulthood. The number of neurons does not vary significantly from mid-gestation to old age. Synaptic number increases rapidly from late gestation to a maximum at about 8 months postnatally. After this, there is a loss of some 40% of synapses so that by about 11 years of age the figure is again close to that at birth, and it varies little during adulthood. This decline corresponds to a parallel decline in the number of dendritic spines on pyramidal neurons.
The development of neurons immunoreactive to gamma-aminobutyric acid (GABA) in human foetal visual cortex was also studied. At 14 weeks GABA-positive cells occurred in all layers, but mostly in the marginal zone (MZ), subplate (SP), deep intermediate zone (IZ) and ventricular zone (VZ). The cortical plate (CP), which gives rise to most definitive adult cortical layers, had few GABAergic cells. At 20 weeks the density of GABA-positive neurons was highest in the definitive cortex, especially the deep layers (VI and V), and was lowest in the IZ, where the white matter would form. The peak of GABA-positivity continued to move superficially, and was in layer IVc by 30 weeks. The laminar distribution stabilised from 30 weeks with three dense bands: in layer IVc and superficial V, layer IVa, and layers II and superficial III.
Inferior temporal (IT) cortex is known to be critical for visual pattern perception and recognition in adult monkeys, but not in infant ones. We found that cells in IT cortex of infant monkeys showed adult-like response properties, including form selectivity for complex patterns such as faces, as early as the second month of life. However, the neural responses were weaker and of longer latency, and moreover were virtually absent under anesthesia below about four months of age. Anatomical inputs from visual cortical regions were adult-like early in infancy, whereas connectivity patterns with non-visual cortical regions appeared to have a more protracted period of maturation.
A review of current and potential imaging techniques that can be used to map human brain activity is presented. The most advanced method, i.e., positron emission tomography (PET) with repeated cerebral blood flow (CBF) imaging using oxygen-15 labelled water, is then further detailed, including recent advances and current unsolved questions in the acquisition and analysis of such data. The major results obtained using this technique in adults in the field of speech and face processing are then discussed. Finally, the feasibility of applying some of these imaging techniques to the study of cognitive functions in infancy is discussed.
Evidence from preference studies suggests that infants can discriminate face and non-face patterns and usually prefer to look longer at face-like patterns. This face preference is present at birth. Recognition memory studies demonstrate that the learning curves for face and non-face patterns differ. On the basis of this and other evidence, some have suggested that faces represent an ecologically privileged class of stimuli and that there is a qualitative difference between the recognition and identification of face and non-face patterns.
Others note that the recognition of faces requires many generalized abilities and suggest it is not qualitatively different from the perception of non-face patterns. For example, the very young infant’s preference for face-like patterns is reduced when face and non-face patterns are equated for visibility (based on amplitude spectra). Most neonates treat faces as abstract patterns and base their preferences for patterns on visibility. Categorization of abstracts and faces uses the same mechanism and processes. Increased experience with faces leads to different categorizations for faces than for abstracts, based on different cues.
A re-analysis of the published literature and new data of my own support the hypothesis that babies confuse the input from different senses. That synesthetic mixing leads to (1) apparent cross-modal matching, which becomes more difficult to demonstrate with increasing age; (2) responses from primary cortical areas to input from the “wrong” senses; (3) shifting visual preferences; and (4) the summation of sensory inputs in determining the baby’s sleep. The hypothesis of neonatal synesthesia has major implications for studies of young infants and may explain some of the inconsistencies in the literature.
The present studies demonstrate that the right hemisphere plays an important role in the processing of individual faces early in life. Face processing between the ages of 4 and 9 months seems to be linked to configural processing. The right hemisphere processes the configural aspects of patterns and faces, while the left hemisphere processes local aspects. The developmental story of face processing cannot however be simply a part of the developmental story of configural processing, since a difference in lateralization has been observed between female and male populations in the configural processing of faces but not of geometrical patterns. This difference between the ways in which the two hemispheres represent the visual world is present at an age when no transfer of this information, once acquired, is possible from one hemisphere to the other. The conjecture is examined that a difference between the maturation rates of some portions of the right and left hemispheres may be one possible factor contributing to the functional differences observed. The preliminary results of a PET scan study performed on 2-month-old infants are not incompatible with this conjecture.
It is proposed that prenatal exposure to maternal speech, in concert with different rates of development of the two hemispheres, results in a left hemisphere specialization for speech by the time of birth. This specialization, together with the tendency for adults to speak to infants as they approach, results in a right hemisphere specialization for faces. It is further proposed that the infant's poor resolution of middle and high spatial frequencies constrains initial right hemisphere processing to one in which the configuration of the face, rather than specific features, is attended to. Improvements in visual functioning lead to a left hemisphere processing of specific features. It is therefore suggested that the characteristic right hemisphere mode of holistic processing and left hemisphere mode of analytic processing derive from early face and voice processing.
In this essay I consider the way in which voice recognition influences the processing of facial information which in turn contributes to the development of multiple modes of information processing in the adult.
The position which I advance is based upon the view that cognitive styles are the outcome of timing relationships between components from many domains developing at different rates. The different rates of change result in dynamic changes in the relationships between components and produce changes in the organization of information processing. The components which I will consider are ecological, neurological, sensory and social. It is my contention that there are developmentally unique aspects to each of these components which are fundamental to the shaping of cognition. Among these developmentally unique characteristics are a highly constrained intrauterine environment, and initially limited but changing sensory capacities. I will indicate how these can function to give voice and face processing a unique ontogenetic role.
Results from studies using the Still-Face procedure showed that 3–6-month-olds respond to dynamic faces in face-to-face interactions, but not to changes in adult voice, touch or contingency, in both live and televised interactions. Infant visual attention distinguished between normal and still-face periods, while smiling distinguished people from objects, and upright from inverted faces. Results from other paradigms showed that the adult voice and touch can affect infant responding and that infants are sensitive to contingency. A complete description of infants' perceptual capacities requires the use of multiple response measures and consideration of the experimental demands.
Frontal lobe activity in human infants during the second half of the first year of life was examined using the ongoing electroencephalogram. Changes in frontal EEG activity were linked to both cognitive and emotional changes that occur during that developmental period. In one series of studies we found that the pattern of asymmetrical activation in the frontal EEG was related to an infant's temperamental disposition. Infants exhibiting greater relative right frontal activation were more likely to cry at maternal separation and to exhibit anxiety and fear in the laboratory. In a second series of studies we found that changes in performance on certain cognitive tasks were a function of frontal EEG maturation. These maturational changes in frontal activity and cognitive performance were a function of infant locomotor experience.
There is more to faces than meets the eye. Infants can see the faces of others, but can also feel their own faces move. We propose a cross-modal hypothesis about why faces are attractive and meaningful to infants. According to this view, faces are attention-getting in part because they look like infants’ own felt experiences. This cross-modal correspondence drives not only visual attention but also action. Infants produce facial acts they see others perform. We here report an experiment on the efficacy of mothers versus strangers in eliciting facial imitation. The development of imitation is also investigated. The results show that there is no disappearance or “drop out” of imitation in early infancy; however, infants develop social expectations about face-to-face interaction that sometimes supersede imitation. Special procedures are required to motivate imitative responding in the 2- to 3-month age range. A theory is proposed about the motivation and functional significance of early facial imitation. According to this theory early imitation subserves a social identity function. Infants treat the facial behaviors of people as identifiers of who they are and use imitative reenactments as a means of verifying the identity of people. Facial imitation and the neural bases of the multimodal representation of faces provide interesting problems in developmental cognitive neuroscience.
This paper links the view that speech perception capacities develop as a result of innately guided learning processes to earlier conceptions in the language learning literature concerning a Language Acquisition Device (LAD). Consistent with this view, several recent studies demonstrating that infants acquire considerable knowledge about the sound structure of their native language during the first year of life are reviewed. Some implications of considering an LAD as a link between developing speech perception and production capacities are discussed.
Data from three experimental sources are reviewed in this chapter. They indicate: (a) that maternal and external voices are transmitted to the level of the fetal head, (b) that the near-term fetus perceives and discriminates speech signals, and (c) that he/she may learn some features of the speech sounds to which he/she was exposed during the last trimester of gestation and remember them postnatally.
In the course of language acquisition, one fundamental operation is the extraction of linguistically relevant units from the continuous speech signal. Prior to experience with a particular language, what kind of units is the initial perceptual system able to extract and represent? By exploring how neonates perform discrimination and categorization tasks when presented with strings of units used in different languages, we attempt to understand the nature of their primary speech representations. We ask how universal are the earliest patterns of perception, and how do they converge onto language specific units.
Developmental theories of face perception and speech perception have similar goals. Theorists in both domains seek to explain infants’ early sophistication with regard to the detection and/or discrimination of facial and speech stimuli and to determine whether infants’ early abilities are due to mechanisms dedicated to the processing of specific biologically relevant stimuli or more general sensory/cognitive mechanisms. In addition, theorists in both domains seek to explain how experience with specific faces and speech sounds modifies infants’ perception. In this chapter, studies showing enhanced discriminability at phonetic boundaries, as well as studies on the perception of phonetic prototypes, exceptionally good instances representing the centers of phonetic categories, are described. The studies show that although phonetic boundary effects are common to monkey and man, prototype effects are not. For human listeners prototypes play a unique role in speech perception. They function like “perceptual magnets,” attracting nearby members of the category. By 6 months of age the prototype’s perceptual magnet effect is language-specific. Exposure to a specific language thus alters infants’ perception prior to the acquisition of word meaning and linguistic contrast. These results support a new theory, the Native Language Magnet (NLM) theory, which describes how innate factors and early experience with language interact in the development of speech perception.
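The magnet metaphor lends itself to a toy calculation. The sketch below is a minimal illustration only (the Gaussian attraction function and all parameter values are assumptions, not Kuhl's published NLM formulation): stimuli near a category prototype are pulled toward it, so the perceived distance between two near-prototype stimuli shrinks more than that between an equally spaced pair far from the prototype.

```python
import numpy as np

def warp(stimulus, prototype, pull=0.6, width=1.0):
    """Pull a stimulus toward the prototype; the pull decays with distance.

    The Gaussian attraction and its parameters are illustrative assumptions,
    not the published Native Language Magnet formulation.
    """
    d = stimulus - prototype
    attraction = pull * np.exp(-(d ** 2) / (2 * width ** 2))
    return stimulus - attraction * d

prototype = 0.0                    # prototype location on one acoustic dimension
pair_near_prototype = (0.2, 0.7)   # two stimuli close to the prototype
pair_near_boundary = (2.2, 2.7)    # same physical separation, far from it

for a, b in (pair_near_prototype, pair_near_boundary):
    perceived_gap = abs(warp(a, prototype) - warp(b, prototype))
    print(f"physical gap {b - a:.1f} -> perceived gap {perceived_gap:.2f}")
# Near the prototype the perceived gap shrinks (~0.29) relative to the
# equally spaced pair far away (~0.58): the "magnet" compresses the space.
```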
This chapter compares recent findings on non-native vowel perception to previous research on non-native consonant discrimination. Previous research examining discrimination of non-native consonant contrasts revealed reliable and replicable influences from the native language by 10–12 months, but recent research on vowel perception has revealed an effect of specific language experience by 6 months of age, as revealed in a language-specific perceptual magnet effect. The similarities and differences between these two bodies of research are considered, and recent data from our lab that allow a synthesis of the two are presented. These data confirm the influence from the native language on vowel perception by 6 months of age, but show that further changes occur during the second half of the first year of life. Additional questions raised by these new findings are posed.
Adults have difficulty discriminating many non-native speech contrasts, yet young infants discriminate both native and non-native contrasts. Language-specific constraints appear by 10–12 months. Evidence presented here suggests that mature listeners’ discrimination is constrained by perceived similarities between non-native sounds and native categories, and that this native language influence may not be fully developed at 10–12 months. The findings suggest that young infants have broadly-tuned perception of phonetic details. Next, they begin to discern equivalence classes that roughly correspond to native phonemes. Perception of phonological contrasts, however, depends on recognition of their linguistic function, and thus develops later. But what sort of information in speech forms the basis for perception of equivalence classes or phonemic contrasts? I argue that distal articulatory gestures, rather than proximal auditory-acoustic cues or abstract phonetic features, are the primitives both for adults’ perceptual assimilations of non-native phones and for infants’ emerging recognition of native categories.
How the human infant comes to possess an interest in vocal behavior, and how this interest encourages development of the capacity for spoken language, have not been recognized as important questions in developmental psycholinguistics. If we are to explain the development of linguistic capacity, we must account for the infant’s attention to things people do while talking. This perceptual orientation starts infants down a developmental growth path which leads to spoken language. Like other primates, humans have a neural specialization for social cognition, as evidenced by behavioral reactions to affective facial and vocal displays and dispositions to participate in interactions where these displays are present. But human mothers and infants also have a rich potential for interaction that includes vocal turn taking, mutual gaze, pointing, and vocal accommodation, as well as a well developed appreciation for the emotional and mental lives of others. Additionally, our species has a neural specialization for grammatical analysis and computation that nonhuman primates lack. The biolinguistic approach to language development taken here sees parallels between the evolution of linguistic capacity in the species and the emergence of that capacity in infancy.
A model is described for early speech pattern representation that combines sensory processing, vocal motor control, and emergent phonological organization. A central hypothesis is that syllables and syllable-based rhythmic patterns induce a proto-linguistic representation compatible with certain constructs of nonlinear phonology. The syllable is defined in terms of sonority theory and aspects of rhythmic patterning. Implications of this model are discussed for intrasyllabic organization and the language-learning phenomenon of fast mapping. In addition, an analytic model that combines autosegmental phonology with ethologic descriptions of infants’ vocal behavior is used to develop metrics of infant vocal productivity and sound diversity.
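To make the notion of "metrics of infant vocal productivity and sound diversity" concrete, here is a minimal sketch under stated assumptions: it treats a session as a flat list of transcribed phones and computes a token count plus the Shannon entropy of the phone distribution. These particular measures are illustrative stand-ins, not the chapter's own metrics.

```python
import math
from collections import Counter

def vocal_metrics(transcribed_phones):
    """Toy productivity/diversity metrics over one session's phone transcription.

    tokens:       how many phone tokens were produced (a productivity stand-in)
    types:        how many distinct phones occurred
    entropy_bits: Shannon entropy of the phone distribution (a diversity stand-in)
    """
    counts = Counter(transcribed_phones)
    total = sum(counts.values())
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return {"tokens": total, "types": len(counts), "entropy_bits": round(entropy, 2)}

session = ["b", "a", "b", "a", "d", "a", "b", "a", "m", "a"]
print(vocal_metrics(session))
# {'tokens': 10, 'types': 4, 'entropy_bits': 1.69}
```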
From the onset of canonical babbling, human vocal output is dominated by the cyclical open-close alternation of the mandible. Mandibular cyclicity has a long evolutionary history in sucking, licking and chewing in mammals, and also appears communicatively in lipsmacks, tonguesmacks and teeth chatters in other primates. It is argued that many of the articulatory regularities in the sound patterns of babbling and of early speech (which closely resembles babbling), including consonants, vowels, syllables, and many of their detailed attributes, can be attributed directly to properties of this basic mandibular cycle. In addition, some interarticulator synergies evolving with the cycle, plus developmental limitations in changing the locus of control between and within articulators during utterances, seem responsible for most other regularities in babbling and early speech.
Cross-linguistic analyses of syllables in the disyllabic productions of infants from four different linguistic communities were used to test the role of perceptual and selective factors in the early organisation of infants' vocal productions. The differences in V1V2 height relations and the favored co-occurrences in CV associations closely reflect the language-specific characteristics exhibited by the disyllabic words the infants will utter some months later. These results support the Interaction Hypothesis, which claims that early perceptual experience with language has already shaped the phonetic and syllabic organisation of 10–12-month-old infants' vocal productions.
A key challenge in the study of early language ontogeny is to discover when and how human language acquisition begins. Here, I attempt to move beyond dichotomous nature-nurture explanations of this process in my pursuit of the mechanisms underlying early language ontogeny. I do this by examining early language acquisition from a different perspective: I compare and contrast spoken and signed language acquisition. Then, based on the four sets of findings summarized below, I formulate a testable theory about the mechanisms that underlie early language acquisition, as well as the specific features of the environmental input, that together make possible human language acquisition. I further propose a new way to construe language ontogeny. Specifically, I advance the hypothesis that speech, per se, is not critical to language acquisition. Instead, I propose that the specific distributional patterns, or structures, encoded in the input — not the specific modality — are the critical input features necessary to enable very early acquisition to begin and to be maintained in our species from birth. A discussion relating the present findings to hypotheses about language phylogeny is also provided.
The reduplicative babbling of five French- and five English-learning infants, recorded when the infants were between the ages of 7;3 months and 11;1 months on average, was examined for evidence of language-specific prosodic patterns. Certain fundamental frequency and syllable-timing patterns in the infants’ utterances clearly reflected the influence of the ambient language. The evidence for language-specific influence on syllable amplitudes was less clear. The results are discussed in terms of a possible order of acquisition for the prosodic features of fundamental frequency, timing, and amplitude.
The origins of phonological system are traced back to the first global ambient language influences, both auditory and visual, on infant vocal production. Vocal exploration allows the child to develop kinesthetic-auditory links as well as articulatory control. The child’s vocal motor schemes provide a perceptual match to selected salient adult words and form the basis for the first identifiable word productions. These vocal motor schemes, shaped by the phonetic affordances of the native language, provide the raw material out of which the child forges an incipient phonological system. Whereas the tightly constrained first words are relatively accurate but scarcely interrelated, gradual development of one or more word schemata leads to rapid lexical advances, less constrained selection, and more radical adaptation of adult forms to increasingly stable child templates. Non-linearity or “regression” in production accuracy marks emergent organization.
Children learning to pronounce the sounds of their languages exhibit individual differences and a varying but often high degree of regularity in their rendering of adult words. The mappings from adult to child words show great phonetic context dependency and considerable stability over time, but often much lexical irregularity and other ‘unruly behavior’. Standard child phonology models, derived from adult-based phonological theory, ignore such unruly phenomena as non-rule-governed template matching, crosstalk between rules, and fuzzy boundaries of rule domains. Connectionist learning models are in principle well adapted to simulating these properties; a model in progress, GEYKO, is sketched. Children have access to several feedback loops in learning to pronounce, both internal (auditory, motor, proprioceptive) and external (parental social and material reinforcement). GEYKO’s speech gesture planning module is intended to learn production with the aid of such feedback, and its auditory planning module will learn perceptual categorization of spectral data by unsupervised learning methods.
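GEYKO itself is only sketched in the chapter, so the code below is not its implementation; it is a generic illustration of what "perceptual categorization of spectral data by unsupervised learning" can look like. The two formant-like feature dimensions, the three seeded vowel regions, and the choice of plain k-means are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up "spectral frames": two formant-like dimensions, three latent regions.
frames = np.vstack([
    rng.normal([300, 2300], 60, size=(40, 2)),   # /i/-like region
    rng.normal([700, 1200], 60, size=(40, 2)),   # /a/-like region
    rng.normal([300,  800], 60, size=(40, 2)),   # /u/-like region
])

# Plain k-means: alternate nearest-centroid assignment and centroid update.
k = 3
centroids = frames[rng.choice(len(frames), size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(np.linalg.norm(frames[:, None] - centroids, axis=2), axis=1)
    centroids = np.array([
        frames[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
        for j in range(k)
    ])

print(np.round(centroids))  # centroids should settle near the three regions
```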
By most accounts, development over the child’s first year is in terms of representational units of a segment and/or syllable size. In this paper, I argue that the first phonological unit of the second year is the prosodic word. This constituent is structured by harmony or melody templates with planar segregation of consonants and vowels. The properties of the templates are the result of the interaction of four parameters. Reorganization at the end of the second year signals the replacement of the prosodic word by the segment and feature as central organizing units. This analysis provides a unified account of previously unrelated acquisition paths and makes clear the relationship between acquisition during the second year and universal constraints on the adult (end state) grammar, but leaves us with a puzzle about the status of the syllable as a representational unit and a possible nonlinearity between the first and second years.
The paper presents evidence for the claim that syntactic processes which are fast, automatic and informationally encapsulated in the adult language system only gradually gain their modular status during development. Findings from a series of behavioral studies demonstrate that the processing status of closed-class elements changes as a function of age. It is proposed that this behavioral change is accompanied by a change in the functional brain topography. Results from studies measuring event-related brain potentials suggest that it may be the anterior parts of the left hemisphere in particular which subserve these automatic syntactic processes.
Both face and voice carry information about an individual’s identity and emotional state. But the visual and auditory channels conveying this information are largely independent. Apart from a speaker’s sex and age, we cannot reliably pair the identities of face and voice; and the observation that a speaker’s face and voice may express quite different emotions is commonplace. Not surprisingly, then, studies of voice and face recognition, or of vocal and facial affect, are typically carried out by different people in different laboratories. Lipreading, by contrast, is typically studied by people who also study speech perception. The reason for this is simply that the two signals, optic and acoustic, that carry the phonetic message, are not independent: they both arise from the same physical source, the speaker’s articulations.
... Generally speaking, the studies cited above (e.g., de Boysson-Bardies et …) find that within-group variability decreases dramatically between the 0-word and the 25-word stage. It seems that vowel distributions also reflect the same tendency to stabilise toward an "adult" state, but few data are available on this point (see, however, Hallé, 1991; de Boysson-Bardies, 1993). … does not exist in French, but it is the opposite tendency that tends to prevail. This is indicated by the affinity scores in Table 2: the situation is highly contrasted for adults. ...
... For example, French words are markedly more polysyllabic than English words. This difference is found in babbling, where the percentages of CVCV-type disyllables are about 79% versus 67% for French and American infants respectively (de Boysson-Bardies, 1993; see also Levitt et al., 1992). In Yoruba, vowel-initial words are very numerous, and this is reflected in the babbling of Yoruba infants, where 52% of disyllables are VCV versus 48% CVCV (de Boysson-Bardies, 1993). The VCV structure, by contrast, is clearly in the minority in French. ...
... While still in the mother's womb, an infant is able to hear her voice and discriminate sounds. Studies have shown that both the mother's voice and the voices of others are heard in utero, and that even prosodic features of speech are perceived (5). Other research showed that newborns preferred to hear a story their mother had read aloud six weeks prior to birth over one they had never heard. Prenatal learning of some acoustic features of the story, probably prosodic, may explain this preference (5). ...
Purpose:
To assess the potential association between psychological risk and limited auditory pathway maturation.
Methods:
In this longitudinal cohort study, 54 infants (31 non-risk and 23 at-risk) were assessed from age 1 to 12 months. All had normal hearing and underwent assessment of auditory maturation through cortical auditory evoked potentials testing. Psychological risk was assessed with the Child Development Risk Indicators (CDRIs) and PREAUT signs. A variety of statistical methods were used for analysis of results.
Results:
Analysis of P1 and N1 latencies showed that responses were similar in both groups. Statistically significant between-group differences were observed only for N1 latency and amplitude at 1 month. Significant maturation occurred in both groups (p<0.05). There was a moderate correlation between P1 latency and the Phase II CDRIs, indicating that children with longer latencies at age 12 months were more likely to show an absence of these indicators in Phase II and were therefore at greater psychological risk. The Phase II CDRIs also correlated moderately with P1 and N1 latencies at 6 months and with N1 latency at 1 month; again, children with longer latencies were at increased risk.
Conclusion:
Slower auditory pathway maturation correlated with the presence of psychological risk. Problems in the mother-infant relationship during the first 6 months of life are detrimental not only to cognitive development, but also to hearing. A fragile relationship may reflect decreased auditory and linguistic stimulation.
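The abstract does not specify which statistical tests were used; as one conventional way (an assumption, with made-up numbers) of relating a continuous cortical-potential latency to a binary risk classification, a point-biserial correlation could be computed as follows.

```python
import numpy as np
from scipy import stats

# Hypothetical data: P1 latencies (ms) at 12 months and a binary indicator
# (1 = absence of Phase II CDRIs, i.e. the child is at psychological risk).
latency_ms = np.array([95, 102, 110, 98, 120, 131, 105, 127, 99, 134])
at_risk    = np.array([ 0,   0,   0,  0,   1,   1,   0,   1,  0,   1])

r, p = stats.pointbiserialr(at_risk, latency_ms)
print(f"point-biserial r = {r:.2f}, p = {p:.3f}")
```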
... B. Boysson-Bardies also cites research on the period in which speech imitation appears in children of different nationalities. It was found that, regardless of culture and nationality, the ability to imitate speech and to differentiate vowel and consonant sounds is already well established by around one year of age [10]. ...
... One of the main similarities is the predominance of open syllables (see Lahrouchi & Kern, p. 498, for review) and more specifically of the consonant-vowel (CV) syllable structure. Its very high prevalence is widely attested in infants learning languages such as English (Davis & Macneilage, 1994; Fagan, 2009; Kent & Bauer, 1985), Spanish (Oller & Eilers, 1982), Russian (Weir, 1966), Tashlhiyt (Lahrouchi & Kern, 2018), Malayalam (Roy & Sreedevi, 2013), French, Swedish and Yoruba (Boysson-Bardies, 1993), Swahili, Quechua, Estonian, Hebrew, Japanese, German, Maori (MacNeilage et al., 2000), or Tunisian, Flemish and Romanian (Kern & Davis, 2009). ...
Cross-linguistic studies describing the syllabic structures of babbling productions agree on the high prevalence of the CV structure, but few have addressed the other types of syllables emerging during this pre-linguistic stage. However, studying the evolution of the distribution of syllabic structures during babbling would make it possible to test both the influence of motor constraints and the influence of the perceptually based patterns from the infant’s language environmental input on the production of early syllables. A monthly follow-up of 22 French infants from 8 to 14 months showed that the distribution CV > V > CCV > CVC > VC was shared by the majority of infants in the sample and remained the same throughout the observation period. The comparison of the frequencies of the structures observed with those attested in adult French and in 4 other languages (Dutch, Korean, Moroccan Arabic and Tunisian Arabic) revealed significant differences between all adult samples and infant productions. The results have implications for understanding the nature of factors impacting syllable production at the babbling stage. We discuss the possibility that the target language does not affect the production of babbled syllables.
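As a hedged sketch of the kind of infant-versus-adult comparison described above (the counts, the adult proportions, and the use of a chi-square goodness-of-fit test are all assumptions for illustration, not the study's data or analysis pipeline), one can tally syllable shapes and test the babbled distribution against an adult-language distribution:

```python
from collections import Counter
from scipy import stats

shapes = ["CV", "V", "CCV", "CVC", "VC"]

# Hypothetical syllable-shape counts from babbling transcriptions.
infant_counts = Counter({"CV": 620, "V": 180, "CCV": 90, "CVC": 70, "VC": 40})

# Hypothetical adult-language proportions for the same five shapes (sum to 1).
adult_props = {"CV": 0.55, "V": 0.08, "CCV": 0.14, "CVC": 0.18, "VC": 0.05}

observed = [infant_counts[s] for s in shapes]
total = sum(observed)
expected = [adult_props[s] * total for s in shapes]

chi2, p = stats.chisquare(observed, f_exp=expected)
ranking = " > ".join(sorted(shapes, key=lambda s: -infant_counts[s]))
print("infant ranking:", ranking)            # CV > V > CCV > CVC > VC
print(f"chi-square = {chi2:.1f}, p = {p:.3g}")
```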
... An important component of early language learning is the link between heard acoustic patterns of speech and produced speech. This is shown by the fact that, even early on, infants begin to produce babble that is unique to the phonemes in their native language [3]. ...
Since infancy humans can learn from social context to associate words with their meanings, for example associating names with objects. The open-question is which computational framework could replicate the abilities of toddlers in developing language and its meaning in robots. We propose a computational framework in this paper to be implemented on a robotics platform to replicate the early learning process of humans for the specific task of word-object mapping.
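The paper's own framework is not reproduced here; as one simple, purely illustrative candidate for word-object mapping, the sketch below implements cross-situational learning: co-occurrence counts between heard words and visible objects are accumulated across ambiguous scenes (all scene data are made up), and each word is mapped to the object it co-occurs with most.

```python
from collections import defaultdict

# Each "scene" pairs the words heard with the objects in view (made-up data).
scenes = [
    ({"look", "ball"},        {"ball", "cup"}),
    ({"the", "ball", "red"},  {"ball", "dog"}),
    ({"nice", "cup"},         {"cup", "dog"}),
    ({"cup", "here"},         {"cup", "ball"}),
    ({"dog", "look"},         {"dog", "cup"}),
    ({"good", "dog"},         {"dog", "ball"}),
]

# Accumulate word-object co-occurrence counts across all scenes.
cooc = defaultdict(lambda: defaultdict(int))
for words, objects in scenes:
    for w in words:
        for o in objects:
            cooc[w][o] += 1

# A word's best mapping is the object it has co-occurred with most often.
for word in ("ball", "cup", "dog"):
    guess = max(cooc[word], key=cooc[word].get)
    print(word, "->", guess)   # each word maps to its intended referent
```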
... Studies have shown the importance, in the period preceding that of the first words, of variegated babbling (Ferguson & Farwell, 1975; McCune & Vihman, 1987; Vihman, DePaolis & Keren-Portnoy, 2009; Menn & Stoel-Gammon, 2005; Oller & Eilers, 1988) in the transition to the first words. Indeed, this variegated babbling supplies certain VCV and CVCV syllabic patterns (Boysson-Bardies, 1993) which will be those of the first words. According to McCune and Vihman (1987), it is not so much a matter of constraints in the OT framework as of the existence of articulatory filters that above all favour the reproduction of the most habitual and least demanding patterns that appeared during babbling, called Vocal Motor Schemes. ...
... This language-general to language-specific shift in speech perception has been observed at both segmental (Kuhl, Conboy, Coffey-Corina, Padden, Rivera-Gaxiola & Nelson, 2005; Werker & Polka, 1993; Werker & Tees, 1984) and suprasegmental levels (Höhle, Bijeljac-Babic, Herold, Weissenborn & Nazzi, 2009; Jusczyk, Cutler & Redanz, 1993; Mattock & Burnham, 2006; Pons & Bosch, 2010; Sato, Sogabe & Mazuka, 2010; Skoruppa, Pons, Bosch, Christophe, Cabrol & Peperkamp, 2013). Regarding speech production, numerous studies argued that language-specific speech production does not occur until later, as it requires a learned mapping between perception and production based on the perceptual patterns stored in memory (de Boysson-Bardies, 1993; Imada, Zhang, Cheour, Taulu, Ahonen & Kuhl, 2006; Kehoe, Stoel-Gammon & Buder, 1995; Kuhl, Conboy, Coffey-Corina, Padden, Rivera-Gaxiola & Nelson, 2008; Kuhl & Meltzoff, 1996), although there is also evidence of early emergence of language-specific speech patterns in neonates' cries (Mampe, Friederici, Christophe & Wermke, 2009). ...
This study investigates Spanish heritage speakers' perception and production of Spanish lexical stress. Stress minimal pairs in various prosodic contexts were used to examine whether heritage speakers successfully identify the stress location despite varying suprasegmental cues (Experiment 1) and whether they use these cues in their production (Experiment 2). Heritage speakers' performance was compared to that of Spanish monolinguals and English L2 learners. In Experiment 1, the heritage speakers showed a clear advantage over the L2 learners and their performance was comparable to that of the monolinguals. In Experiment 2, both the heritage speakers and the L2 learners showed deviating patterns from the monolinguals; they produced a large overlap between paroxytones and oxytones, especially in duration. The discrepancy between heritage speakers' perception and production suggests that, while early exposure to heritage language is beneficial for the perception of heritage language speech sounds, this factor alone does not guarantee target-like production.
... It seems that certain specific networks of the visual cortex are functional from birth (De Schonen, 2009; De Schonen & Deruelle, 1991; De Schonen et al., 1994), since several studies show that the newborn can learn to differentiate its mother's face from that of a stranger (Bushnell, Sai, & Mullin, 1989; Pascalis, De Schonen, Morton, Deruelle, & Fabre-Grenet, 1995). From the age of six to eight weeks, the infant recognises the maternal face even when the mother and the stranger wear a headscarf (Morton, 1993). From the age of four months, the infant recognises its mother's face in a photograph even when the whole outline of her face is masked by a headscarf (Pascalis et al., 1995). ...
Within the early relationship between the infant and the mother's face, the object concerned is not yet a metapsychological object. This highly specialised visual object is a true “attractor” for the newborn, a “salient” source-pattern carrying attachment. Indeed, we first situate the relationship to this original object within neuropsychological mechanisms, out of which the drive-based investment between subject and object will be established. Thom's semiophysics allows us to model this path from a visual perceptual object to an object of identification through a multidisciplinary approach. The contribution of semiophysics ultimately makes it possible to reconcile drive theory and attachment theory.
https://www.sciencedirect.com/science/article/pii/S254236061830074X
Keywords: Mother's face; Neural basis; Semiophysics; Attachment; Identification
... Furthermore, although many specialists consider that infraphonological competence (which precedes babbling) is not yet language specific [4] [16], it is worth investigating whether specific linguistic signatures might be woven into pre-babbling vocalization, thus facilitating its 'intelligibility' for adults. ...
... Studies on infants' perception and production of speech support that language-specific patterns emerge in speech perception prior to speech production (de Boysson-Bardies, 1993; Imada et al., 2006; Kuhl & Meltzoff, 1996; Kuhl et al., 1992; Kuhl et al., 2006; Kuhl et al., 2008; Polka & Werker, 1994). According to the Native Language Magnet theory, expanded (NLM-e) (Kuhl et al., 2008), infants store perceptual information of speech sounds during the early months of life, when production is still primitive and highly variable. ...
... Very early life experience-dependent bias can even eliminate "innate" abilities as manifest in, for example, the finding that babies lose their ability to perceive multiple phonemic cues (sounds used in languages) that are irrelevant to their language experience before they attain 1 year of age (Eimas 1975; Eimas et al. 1971; Lasky et al. 1975; Werker & Lalonde 1988). On the other hand, by 10 months of age, language-specific differences can be discerned in the babbling of infants raised in different countries (de Boysson-Bardies 1993). The main question is not of innateness of perception or action, but rather how infants learn and form selective phonemic categories that make a difference in their language so early in life (Kuhl 2010). ...
Leibovich et al. opened up an important discussion on the nature and origins of numerosity perception. The authors rightly point out that non-numerical features of stimuli influence this ability. Despite these biases, there is evidence that from birth, humans perceive and represent numerosities, and not just non-numerical quantitative features such as item size, density, and convex hull.
Leibovich et al. challenge the prevailing view that non-symbolic number sense (e.g., sensing number the same way one might sense color) is innate and that detection of numerosity is distinct from detection of continuous magnitude. In the present commentary, the authors' viewpoint is discussed in light of the integrative theory of numerical development, along with implications for understanding mathematics disabilities.
The conclusions reached by Leibovich et al. urge the field to regroup and consider new ways of conceptualizing quantitative development. We suggest three potential directions for new research that follow from the authors' extensive review, as well as building on the common ground we can take from decades of research in this area.
Leibovich et al. propose that non-symbolic numerosity abilities develop from the processing of more basic, continuous magnitudes such as size, area, and density. Here I review similar arguments arising in the visual perception field and further propose that the emergence of discrete representations from continuous stimulus properties may be a fundamental characteristic of cognitive development.
The authors rightly point to the theoretical importance of interactions of space and number through the life span, yet propose a theory with several weaknesses. In addition to proclaiming itself unfalsifiable, its stage-like format and emphasis on the role of selective attention are at odds with what is known about the development of spatial-numerical associations in infancy.
In response to the commentaries, we have refined our suggested model and discussed ways in which the model could be further expanded. In this context, we have elaborated on the role of specific continuous magnitudes. We have also found it important to devote a section to evidence considered the “smoking gun” of the approximate number system theory, including cross-modal studies, animal studies, and so forth. Lastly, we suggested some ways in which the scientific community can promote more transparent and collaborative research by using an open science approach, sharing both raw data and stimuli. We thank the contributors for their enlightening comments and look forward to future developments in the field.
In this review, we are pitting two theories against each other: the more accepted theory, the 'number sense' theory, suggesting that a sense of number is innate and that non-symbolic numerosity is processed independently of continuous magnitudes (e.g., size, area, density); and the newly emerging theory suggesting that (1) both numerosities and continuous magnitudes are processed holistically when comparing numerosities, and (2) a sense of number might not be innate. In the first part of this review, we discuss the 'number sense' theory. Against this background, we demonstrate how the natural correlation between numerosities and continuous magnitudes makes it nearly impossible to study non-symbolic numerosity processing in isolation from continuous magnitudes, and therefore the results of behavioral and imaging studies with infants, adults and animals can be explained, at least in part, by relying on continuous magnitudes. In the second part, we explain the 'sense of magnitude' theory and review studies that directly demonstrate that continuous magnitudes are more automatic and basic than numerosities. Finally, we present outstanding questions. Our conclusion is that there is no longer enough convincing evidence to support the number sense theory. Therefore, we encourage researchers not to assume that number sense is simply innate, but to put this hypothesis to the test, and to consider whether such an assumption is even testable in light of the correlation of numerosity and continuous magnitudes.
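The confound described above can be shown with a simple simulation (an illustrative assumption, not a reanalysis of any study in the review): when dot sizes are drawn at random and the display field is fixed, total surface area and density track numerosity almost perfectly unless they are deliberately decorrelated.

```python
import numpy as np

rng = np.random.default_rng(1)

n_displays = 500
numerosity = rng.integers(5, 30, size=n_displays)   # dots per display
field_area = 100.0                                   # fixed display field size

# Draw a random radius for every dot, then sum dot areas within each display.
total_area = np.array([
    np.sum(np.pi * rng.uniform(0.8, 1.2, size=n) ** 2) for n in numerosity
])
density = numerosity / field_area                    # dots per unit field area

print("r(numerosity, total area):", round(np.corrcoef(numerosity, total_area)[0, 1], 2))
print("r(numerosity, density):   ", round(np.corrcoef(numerosity, density)[0, 1], 2))
# Both correlations come out near 1: any numerosity effect is confounded with
# continuous magnitudes unless stimuli are constructed to decorrelate them.
```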
... Language experience modifies infants' initial sensitivities to speech sounds so by the end of the first year of life sensitivity towards native sound distinctions is not only maintained but even sharpened (Kuhl et al., 2006), while ability to discriminate nonnative sound contrasts has significantly decreased (Werker & Tees, 1984; see also Werker, Yeung, & Yoshida, 2012, for a general review of these perceptual narrowing processes). At the same time, speech production abilities appear very early following a similar developmental trajectory: while early "canonical babbling" (around 7–8 months of age) is still language general, the so-called variegated non-reduplicative babbling, usually appearing by the end of the first year of life (10–12 months of age), not only shows language-specific properties (de Boysson-Bardies & Vihman, 1991; De Boysson-Bardies, 1993) but also shows an increase in the quality and tuning of the speech sounds produced (Kuhl & Meltzoff, 1996). As stated in the SCALED model, these language acquisition processes might require the functional maturation of the dorsal pathways in order to convey information from brain areas related to perception and production, especially between posterior superior temporal, inferior parietal and premotor and inferior frontal regions. ...
... Johnson's main objective has been to identify changes in observable behaviour in the first months of life and to try to relate them to the development or functional modification of particular neural structures or connections. Beyond this perspective, other authoritative examples are already available in the literature showing how data from the neurosciences can lead to a more satisfactory understanding of certain psychological functions and of their ontogenetic paths (de Boysson-Bardies et al., 1993; Elman et al., 1996). Among these, studies on the development of memory (Nelson, 1995) and on the perception of facial expressions in the first months of life (Nelson & de Haan, 1996) have proved particularly relevant. ...
... When presented with face-like and nonface-like patterns, newborns spontaneously look longer at, and orient more frequently toward, upright face-like configurations[26][27]. An upright face preference at birth has been demonstrated with both static and moving stimuli[28][29][30][31], and with both schematic and veridical images of faces[20][21][22]. Note, however, that at birth the upright face preference seems to depend on the existence of general biases that orient newborns' attention toward certain structural properties that faces share with other visual stimuli[32][33][34]. ...
Orienting visual attention allows us to properly select relevant visual information from a noisy environment. Despite extensive investigation of the orienting of visual attention in infancy, it is unknown whether and how stimulus characteristics modulate the deployment of attention from birth to 4 months of age, a period in which the efficiency of orienting of attention improves dramatically. The aim of the present study was to compare 4-month-old infants’ and newborns’ ability to orient attention from central to peripheral stimuli that have the same or different attributes. In Experiment 1, all the stimuli were dynamic and the only attribute of the central and peripheral stimuli to be manipulated was face orientation. In Experiment 2, both face orientation and motion of the central and peripheral stimuli were contrasted. The number of valid trials and saccadic latency were measured at both ages. Our results demonstrated that the deployment of attention is mainly influenced by motion at birth, while it is also influenced by face orientation at 4 months of age. These findings provide insight into the development of the orienting of visual attention in the first few months of life and suggest that maturation may not be the only factor that determines the developmental change in orienting visual attention from birth to 4 months.
... There are, however, some researchers who agree that changes in the height dimension are easier and precede changes in the front-back dimension of the oral cavity (Bickley, 1982; Boysson-Bardies, 1993; Hodge, 1989). ...
... However, an analysis of three geometric models from a developmental perspective showed that consonants and vowels could interact on the same plane at the earliest stages (Gierut, Cho & Dinnsen, 1993). Thus, several studies have found very early consonant-to-vowel assimilations (Boysson-Bardies, 1993; Stoel-Gammon, 1983; Tyler & Langsdale, 1996), which in some cases have been explained in relation to the sound that follows the vowel (Song & Demuth, 2008). ...
... Learning to produce the sounds that will characterize infants as speakers of their "mother tongue" is equally challenging, and is not completely mastered until the age of 8 yr (Ferguson et al. 1992). Yet, by 10 mo of age, differences can be discerned in the babbling of infants raised in different countries (de Boysson-Bardies 1993), and in the laboratory, vocal imitation can be elicited by 20 wk (Kuhl and Meltzoff 1982). The speaking patterns we adopt early in life last a lifetime (Flege 1991). ...
Explaining how every typically developing child acquires language is one of the grand challenges of cognitive neuroscience. Historically, language learning provoked classic debates about the contributions of innately specialized as opposed to general learning mechanisms. Now, new data are being brought to bear from studies that employ magnetoencephalography (MEG), electroencephalography (EEG), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI) with young children. These studies examine the patterns of association between brain and behavioral measures. The resulting data offer both expected results and surprises that are altering theory. As we uncover what it means to be human through the lens of young children, and their ability to speak, what we learn will not only inform theories of human development, but also lead to the discovery of neural biomarkers, early in life, that indicate risk for language impairment and allow early intervention for children with developmental disabilities involving language.
... In developmental terms, the shared intuitions of synaesthetes and non-synaesthetes have been explained by the neonatal synaesthesia hypothesis (Maurer 1993; Maurer & Maurer 1988; Maurer & Mondloch 2004) which suggests that adult synaesthesia may be a reflection of early synaesthetic states found in all neonates. This theory points out that the brains of both adult synaesthetes and all neonates show abundant cortical connectivity and thicker grey matter (e.g., Gogtay et al., 2004) in turn suggesting that all people may have synaesthesia-like experiences in early life. ...
Sound symbolism is a property of certain words which have a direct link between their phonological form and their semantic meaning. In certain instances, sound symbolism can allow non-native speakers to understand the meanings of etymologically unfamiliar foreign words, although the mechanisms driving this are not well understood. We examined whether sound symbolism might be mediated by the same types of cross-modal processes that typify synaesthetic experiences. Synaesthesia is an inherited condition in which sensory or cognitive stimuli (e.g., sounds, words) cause additional, unusual cross-modal percepts (e.g., sounds trigger colours, words trigger tastes). Synaesthesia may be an exaggeration of normal cross-modal processing, and if so, there may be a link between synaesthesia and the type of cross-modality inherent in sound symbolism. To test this we predicted that synaesthetes would have superior understanding of unfamiliar (sound symbolic) foreign words. In our study, 19 grapheme-colour synaesthetes and 57 non-synaesthete controls were presented with 400 adjectives from 10 unfamiliar languages and were asked to guess the meaning of each word in a two-alternative forced-choice task. Both groups performed above chance, but synaesthetes significantly outperformed controls. This heightened ability suggests that sound symbolism may rely on the types of cross-modal integration that drive synaesthetes' unusual experiences. It also suggests that synaesthesia endows or co-occurs with heightened multi-modal skills, and that this can arise in domains unrelated to the specific form of synaesthesia.
This study examined phonetic backward transfer in ‘Glaswasians’, the ethnolinguistic minority of first-generation bilingual immigrant Indians in Glasgow (Scotland), who present a situation of contact between their native languages of Hindi and Indian English (L1s) and the dominant host language and dialect, Glaswegian English (L2). This was examined in relation to the Revised Speech Learning Model (SLM-r) and Speech Accommodation Framework. These predict that the migrants’ L1 sound categories can either shift to become more Glaswegian-like (‘assimilation’ or ‘convergence’) or exaggeratedly Indian-like (‘dissimilation’ or ‘divergence’) or remain unchanged. The effect of Indian and Glaswegian Contact on transfer was also investigated. Two control groups (Indians and Glaswegians) and the experimental group (Glaswasians) were recorded reading English and Hindi sentences containing multiple phones which were examined for multiple phonetic features (/t/—VOT, /l/—F2-F1 difference, /b d g/—Relative Burst Intensity). In both languages, Glaswasian /t/ and /g/ became more Glaswegian-like (assimilation), whereas F2-F1 difference in /l/ became exaggeratedly Indian-like (dissimilation). Higher Indian Contact was associated with more native-like values in /t/ and /l/ in Hindi but had no influence on /g/. Higher Glaswegian Contact was related to increased assimilation of /g/ in English but had no effect on /l/ and /t/.
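To make one of the acoustic measures concrete, here is a minimal, hypothetical sketch of how an F2-F1 difference for an annotated /l/ token might be extracted with the praat-parselmouth Python library. This is not the authors' actual pipeline: the file name, segment boundaries, and formant-ceiling setting are illustrative assumptions, and in practice such measurements would be taken from hand-segmented recordings.

```python
# Hypothetical sketch: measuring the F2-F1 difference at the midpoint of an /l/ token,
# one of the phonetic features examined in the study above. Assumes the
# praat-parselmouth library, a WAV file, and hand-annotated segment boundaries.
import parselmouth  # Python interface to Praat

def f2_f1_difference(wav_path: str, seg_start: float, seg_end: float) -> float:
    """Return F2 - F1 (Hz) at the temporal midpoint of an annotated /l/ segment."""
    snd = parselmouth.Sound(wav_path)
    formants = snd.to_formant_burg(maximum_formant=5500.0)  # common ceiling for adult speech
    midpoint = (seg_start + seg_end) / 2.0
    f1 = formants.get_value_at_time(formant_number=1, time=midpoint)
    f2 = formants.get_value_at_time(formant_number=2, time=midpoint)
    return f2 - f1

# Example (hypothetical file and times):
# value = f2_f1_difference("speaker_l_token.wav", seg_start=0.312, seg_end=0.398)
# A lower F2-F1 value is conventionally read as a darker (more velarised) /l/.
```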
When encountering an unfamiliar accent, a hypothesized perceptual challenge is associating its phonetic realizations with the intended phonemic categories. Greater accumulated exposure to the language might afford richer representations of phonetic variants, thereby increasing the chance of detecting unfamiliar accent speakers’ intended phonemes. The present study examined the extent to which the detection of vowel phonemes spoken in an unfamiliar regional accent of English is facilitated or hindered depending on their acoustic similarity to vowels produced in a familiar accent. Monolinguals, experienced bilinguals and native German second-language (L2) learners completed a phoneme detection task. Based on duration and formant trajectory information, unfamiliar accent speakers’ vowels were classed as acoustically “similar” or “dissimilar” to counterpart phonemes in the familiar accent. All three participant groups were substantially less sensitive to the phonemic identities of “dissimilar” compared to “similar” vowels. Unlike monolinguals and bilinguals, L2 learners showed a response shift for “dissimilar” vowels, reflecting a cautious approach to these items. Monolinguals displayed somewhat heightened sensitivity compared to bilinguals, suggesting that greater accumulated exposure aided phoneme detection for both “similar” and “dissimilar” vowels. Overall, acoustic similarity predicted the relative success of detecting vowel phonemes in cross-dialectal speech perception across groups with varied linguistic backgrounds.
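As a rough illustration of what an acoustic "similar"/"dissimilar" classification could look like, the sketch below compares two vowel tokens on duration and a sampled F1/F2 trajectory. The distance measure, weighting, and threshold are placeholder assumptions introduced purely for illustration; they are not the criterion used in the study.

```python
# Illustrative sketch (not the study's actual criterion): classing an unfamiliar-accent
# vowel token as "similar" or "dissimilar" to a familiar-accent counterpart, using
# duration plus a sampled F1/F2 trajectory. Thresholds and weights are arbitrary.
import numpy as np

def vowel_distance(dur_a, traj_a, dur_b, traj_b, dur_weight=1.0):
    """traj_* are (n_timepoints, 2) arrays of [F1, F2] in Hz, sampled at matched
    proportional time points; dur_* are vowel durations in seconds."""
    traj_a, traj_b = np.asarray(traj_a, float), np.asarray(traj_b, float)
    spectral = np.mean(np.linalg.norm(traj_a - traj_b, axis=1))  # mean trajectory distance
    temporal = abs(dur_a - dur_b) * 1000.0                       # duration difference in ms
    return spectral + dur_weight * temporal

# Hypothetical tokens sampled at 25%, 50%, 75% of the vowel:
unfamiliar = np.array([[650, 1100], [640, 1150], [620, 1200]])
familiar = np.array([[600, 1300], [590, 1350], [580, 1400]])
label = "dissimilar" if vowel_distance(0.14, unfamiliar, 0.11, familiar) > 250 else "similar"
print(label)
```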
In Nyāya philosophy, a special kind of extraordinary sensory connection is admitted, named jñānalakṣaṇā pratyāsatti or jñānalakṣaṇa sannikarṣa. It is held that our sense-organs can sometimes be connected to an object that is not amenable to the operating sense-organ. In such cases, cognition (jñāna) plays the role of the sensory connection and connects its own content to the operating sense-organ. The paradigmatic example of jñānalakṣaṇa perception is to 'see' fragrant sandalwood through the visual sense from a distance at which it cannot be smelled. This hypothesis of jñānalakṣaṇa has been criticized by opponents as counterintuitive, mysterious and theoretically overloaded. This paper tries to demystify the notion. It shows that although it may seem a metaphysically mysterious phenomenon at first sight, it is not so at all. The paper explores the psychological process involved in this sensory connection. The hypothesis is shown to have sufficient explanatory power, because the Naiyāyikas have used it to explain five different epistemic situations. Hence, this paper argues that it is not a theoretical overload. The opponents counter-argue that all five cognitive situations can be explained without admitting jñānalakṣaṇa; moreover, if we admit jñānalakṣaṇa, then a particular kind of inference becomes redundant. The paper answers these objections and defends the hypothesis. The second part of the paper presents empirical evidence in support of the hypothesis. The arguments levelled against the hypothesis of jñānalakṣaṇa can be contested on the ground that they try to disprove something which is supported on experimental grounds. Experiments represent universally acceptable objective facts supported by experience, the denial of which amounts to anubhavavirodha, which philosophers would want to avoid. Hence, supporting jñānalakṣaṇa on the ground of scientific experiments can be considered a philosophical stand. Now, there is a clinically recognized and neurophysiologically established condition, called synaesthesia, in which stimulation of a particular sensory modality automatically and involuntarily activates a different sensory modality simultaneously, without direct stimulation of the second modality. For example, when a sound → colour synaesthete listens to a particular tone such as C-sharp, she visualizes a particular colour, such as blue, in her mind's eye; for a grapheme → colour synaesthete, a particular number or letter is always tinged with a particular colour. This paper shows that the cognitive process involved in synaesthesia lends support to the hypothesis of jñānalakṣaṇa pratyāsatti. It has been shown through several experiments that synaesthesia is a genuine perceptual phenomenon and not a confabulation of memory. There are several alternative theories which explain the phenomenon neurophysiologically; the paper discusses the most popular one, the cross-activation hypothesis. There are two major objections against the project of comparing jñānalakṣaṇa with synaesthesia. First, synaesthesia is a neurological condition present in a small number of people, whereas jñānalakṣaṇa is claimed to be a universal phenomenon. Second, synaesthesia is a sensory experience, whereas jñānalakṣaṇa involves the application of concepts. The paper answers these questions: firstly, multimodal processing in the brain is a universal phenomenon; secondly, there is a form of synaesthesia in which top-down processing is involved, and in those cases concepts play an important role in generating the synaesthetic experience.
Synaesthesia is a neurological condition in which sensory or cognitive stimulation in a specific modality automatically and involuntarily gives rise to an additional, unusual perceptual experience. Synaesthesia is thought to influence the development of certain cognitive abilities, particularly memory. Moreover, an intuitively appealing hypothesis popular within the scientific community holds that these atypical sensory experiences facilitate creativity: because they provide an original perceptual repertoire, it is conceivable that they could lead to an aptitude for making unconventional associations between various categories of elements. Consequently, this qualitative study aims to better understand the influences of synaesthetic experiences on creativity from the point of view of those who live these experiences. To better understand the phenomenology of synaesthesia in creativity, our research questions can be summarized as follows: What meaning can the experience of synaesthesia have for creativity, and what repercussions does it have on creativity? We are interested in understanding how synaesthesia can be perceived, exert influence, and be used during a creative process. To our knowledge, this is the first study to explore the phenomenology of synaesthesia both across several individuals and across several forms of synaesthesia, with respect to a characteristic said to be central to this condition, namely creativity.
In this research, 17 people with various forms of synaesthesia, aged 21 to 72, were interviewed in semi-structured individual sessions to share their experiences. The interviews were transcribed and analysed using the phenomenological research method in psychology. Ultimately, the fundamental structure of the phenomenon under study that emerges from our analyses comprises the following 10 themes regarding the meaning given to the experience of synaesthesia with respect to creativity:
1) The experience of synaesthesia orients and modulates the creative process according to synaesthetic associations, so as to reach a particular state and enhance well-being;
2) The experience of synaesthesia constitutes a sensory, perceptual and emotional contribution to creativity;
3) The experience of synaesthesia optimizes cognitive functions;
4) The experience of synaesthesia fuels intellectual stimulation and motivation in a creative process;
5) The experience of synaesthesia gives meaning to elements and to lived experience;
6) The experience of synaesthesia is perceived as an instinctive system that influences creativity, an intuition at the origin of a decision-making process, or a reflex of transposing associations that are independent of cognition;
7) The experience of synaesthesia allows one to know oneself better, to identify oneself, and to express and own what constitutes one's individuality;
8) The experience of synaesthesia constitutes a channel of communication with others and of exchange with other realities;
9) The experience of synaesthesia may have no influence on creativity; synaesthetic associations may be perceived as matters of fact that are of practical use and have no bearing on choices made in response to needs;
10) The experience of synaesthesia can act as a brake on creativity.
The present study developed a better understanding of the components of creativity, advanced the state of knowledge on synaesthetic experiences, and identified the core of their influences on creativity. For the first time, the full range of advantages and disadvantages of synaesthetic experiences for creativity has been laid out. Conducting this study with synaesthetes who present a variety of synaesthesias also made it possible to identify processes underlying creativity that are common across the broad spectrum of synaesthesia, and to offer possible explanations for the individual differences that may emerge. In doing so, various processes that can influence and feed creativity in all individuals were brought to light. This research can also shed light on the universal mechanisms of interaction between the senses and on their use for particular purposes. Future studies with large samples of participants should attempt to tease apart different attributes of synaesthesia and explore their respective contributions to various aspects of cognition and emotion. The repercussions of exploiting automatic associations, synaesthetic or not, could also be studied in therapeutic and educational settings. Finally, establishing networks of synaesthesia researchers and synaesthetes could stimulate the development and dissemination of knowledge about synaesthesia and human potential, destigmatize this condition, deconstruct misconceptions, and foster the flourishing of individuals in the wider synaesthete community.
While previous research has shown that bilinguals are able to effectively maintain two sets of phonetic norms, these two phonetic systems experience varying degrees of cross-linguistic influence, driven by both long-term (e.g., proficiency, immersion) and short-term (e.g., bilingual language contexts, code-switching, sociolinguistic) factors. This study examines the potential for linguistic environment, or the language norms of the broader community in which an interaction takes place, to serve as a source of short-term cross-linguistic phonetic influence. To investigate the role of linguistic environment, late bilinguals (L1 English—L2 Spanish) produced Spanish utterances in two sessions that differed in their linguistic environments: an English-dominant linguistic environment (Indiana, USA) and a Spanish-dominant linguistic environment (Madrid, Spain). Productions were analyzed at the fine-grained acoustic level, through an acoustic analysis of voice onset time, as well as more holistically through native speaker global accent ratings. Results showed that linguistic environment did not significantly impact either measure of phonetic production, regardless of a speaker’s second language proficiency. These results, in conjunction with previous results on long- and short-term sources of phonetic influence, suggest a possible primacy of the immediate context of an interaction, rather than broader community norms, in determining language mode and cross-linguistic influence.
The general aim of this thesis is to characterize the reciprocal influences between spoken and written language. To this end, the co-activation of orthographic and phonological codes was studied in adults and children, in both spoken and written production. In written production in adults, phonological codes are activated only with auditory stimuli (Experiments 1 to 4). Moreover, even when their activation is facilitated, phonological codes do not appear to be activated with visual stimuli (Experiments 5 and 6). In children, no answer could be provided concerning the influence of spoken language on writing (Experiment 7). In spoken production in adults, orthographic codes are not systematically activated (Experiment 8). However, they can play a role when their activation is facilitated (Experiments 9 to 12). In children, no answer could be provided concerning the influence of writing on spoken language (Experiment 13). The co-activation of phonological and orthographic codes is therefore not systematic in either written or spoken production. Spoken production, however, appears to be more strongly influenced by written language. The influence between spoken and written language would thus be asymmetrical.
Available here: https://hal.archives-ouvertes.fr/tel-01996879v1
The recently founded Research Group on Typical and Atypical Child Language Development (Tipikus és Atipikus Gyermeknyelvfejlődési Kutatócsoport) aims to track and document children's language development and its disorders from the earliest years to the end of the early school years, and to investigate the effects of therapy on atypical development. The group also intends to examine the relationships between linguistic and cognitive abilities, questions of multilingualism, and the ways these manifest in society. It is well known that acquiring the ability to use language is an eventful journey that begins in the womb and continues beyond adolescence. Since the process of language development is relatively long and involves many challenges, the present paper, written on the occasion of the founding of the research group, highlights only a brief but decisive period within it: the stage in which the child moves from initially clumsy-seeming attempts at vocalization into the foundational period of voluntary speech acts, when the articulatory system, despite age-typical speech errors, becomes mature enough to produce the first intelligible words.
«L'ombilic psychosomatique et la pathologie du double. Perspectives théoriques et cliniques» ("The psychosomatic umbilicus and the pathology of the double: theoretical and clinical perspectives") is a study exploring early mother-child relations, a relationship termed "doubling" (en double), which can give rise to a specific psychopathology in cases of early fixation. The research presented is based on adult psychiatric clinical work.
This article examines English vowel perception by advanced Polish learners of English in a formal classroom setting (i.e., they learnt English as a foreign language in school while living in Poland). The stimuli included 11 English nonce words in bilabial (/bVb/), alveolar (/dVd/) and velar (/gVg/) contexts. The participants, 35 first-year English majors, were examined during the performance of three tasks with English vowels: a categorial discrimination oddity task, an L1 assimilation task (categorization and goodness rating) and a task involving rating the (dis-)similarities between pairs of English vowels. The results showed a variety of assimilation types according to the Perceptual Assimilation Model (PAM) and the expected performance in a discrimination task. The more difficult it was to discriminate between two given vowels, the more similar these vowels were judged to be. Vowel contrasts involving height distinctions were easier to discriminate than vowel contrasts with tongue advancement distinctions. The results also revealed that the place of articulation of neighboring consonants had little effect on the perceptibility of the tested English vowels, unlike in the case of lower-proficiency learners. Unlike previous results for naïve listeners, the present results for advanced learners showed no adherence to the principles of the Natural Referent Vowel framework. Generally, the perception of English vowels by these Polish advanced learners of English conformed with PAM's predictions, but differed from vowel perception by naïve listeners and lower-proficiency learners.
In response to the commentaries, we have refined our suggested model and discussed ways in which the model could be further expanded. In this context, we have elaborated on the role of specific continuous magnitudes. We have also found it important to devote a section to evidence considered the “smoking gun” of the approximate number system theory, including cross-modal studies, animal studies, and so forth. Lastly, we suggested some ways in which the scientific community can promote more transparent and collaborative research by using an open science approach, sharing both raw data and stimuli. We thank the contributors for their enlightening comments and look forward to future developments in the field.
This study investigates infants' transition from nonverbal to verbal communication using evidence from regression patterns. As an example of regressions, prelinguistic infants learning American Sign Language (ASL) use pointing gestures to communicate; at the onset of single signs, however, these gestures disappear. Petitto (1987) attributed the regression to the children's discovery that pointing has two functions, namely deixis and linguistic pronouns. The 1:2 relation (1 form, 2 functions) violates the simple 1:1 pattern that infants are believed to expect, and this kind of conflict, Petitto argued, explains the regression. Based on the additional observation that the regression coincided with the boundary between prelinguistic and linguistic communication, Petitto concluded that the prelinguistic and linguistic periods are autonomous. The purpose of the present study was to evaluate the 1:1 model and to determine whether it explains a previously reported regression of intonation in English. Background research showed that gestures and intonation have different forms but the same pragmatic meanings, a 2:1 form–function pattern that plausibly precipitates the regression. The hypothesis of the study was that gestures and intonation are closely related and, because they were expected to change in opposite directions, that they would show a robust inverse (negative) correlation. To test this prediction, speech samples of 29 infants (8–16 months) were analyzed acoustically and compared to parent-report data on several verbal and gestural scales. In support of the hypothesis, gestures alone were inversely correlated with intonation. In addition, the regression model explains nonlinearities stemming from different form–function configurations. However, the results failed to support the claim that regressions linked to early words or signs reflect autonomy. The discussion ends with a focus on the special role of intonation in children's transition from "prelinguistic" communication to language.
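The predicted inverse relationship can be illustrated with a minimal sketch of a rank-order correlation between a parent-report gesture score and an acoustic intonation measure. The data below are simulated with a built-in negative trend purely for demonstration; this is not the study's dataset or its exact analysis.

```python
# Minimal sketch of the kind of inverse-correlation test described above, using
# simulated data: one parent-report gesture score and one acoustic intonation
# measure (e.g., mean F0 range) per infant. Not the study's actual analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_infants = 29
gesture_scores = rng.integers(0, 20, size=n_infants)                     # hypothetical parent-report scores
intonation = 10 - 0.3 * gesture_scores + rng.normal(0, 1.5, n_infants)   # simulated inverse trend plus noise

rho, p = spearmanr(gesture_scores, intonation)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")  # a negative rho reflects the predicted inverse relation
```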
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating ‘perceptual narrowing’ in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.
Virginia Woolf wrote in her suicide note: ‘I feel certain that I am going mad again: I feel we can’t go through another of those terrible times. And I shan’t recover this time. I begin to hear voices, and can’t concentrate. So I am doing what seems the best thing to do’ (Virginia Woolf to Leonard Woolf, 18 March 1941, quoted in Bell 1972). Her depression was accompanied by such psychotic features as auditory hallucinations at many different periods of her life. As far back as 1921 she was writing in her memoir ‘Old Bloomsbury’ that ‘I had lain in bed at the Dickinsons’ house at Welwyn thinking that the birds were singing choruses and that King Edward was using the foulest possible language among Ozzie Dickinson’s azaleas’ (Woolf 1921). Her husband Leonard Woolf writes that ‘She spoke somewhere about ‘the voices that fly ahead’, and she followed them … when she was at her worst and her mind was completely breaking down again the voices flew ahead of her thoughts: and she actually heard voices which were not her voice; for instance, she thought she heard the sparrows outside the window talking in Greek. When that happened to her, in one of her attacks, she became incoherent because what she was hearing and the thoughts flying ahead of her became completely disconnected’ (Woolf 1995; Bell 1972). There is also evidence that on rare occasions she suffered from visual hallucinations. For instance, she says, on becoming less obsessed with her deceased mother, that ‘I no longer hear her voice; I do not see her’ (Woolf 1939). This chapter presents what we now know of the failure of brain function leading to the distortions of consciousness such as occur in auditory and visual hallucinations.
This book provides an overview by international authorities, spanning the disciplines of neuroscience, psychology, ophthalmology, optometry, and paediatrics, of normal and pathological infant visual development. It covers the development of retinal receptors; infant sensitivity to detail, colour, contrast, and movement; binocularity, eye movements, and refraction; and cognitive processing. Children's visual deficits, including amblyopia and cataract, are also discussed.
Vervet monkeys are an appropriate species to model aspects of the neurobiology of human social decision-making. Vervets are phylogenetically closely related to human beings and possess large brains that show substantial postnatal maturation, exhibit rich behavioral repertoires, and establish enduring and complex social relationships. This chapter describes two types of investigations that examined the links between monoaminergic neurotransmitter systems, frontal lobe function, and effective social decision-making in vervet monkeys. One set examined indices of serotonergic function while the other utilized positron emission tomography (PET) to evaluate cerebral glucose metabolism.
There are periods in development during which experience plays its largest role in shaping the eventual structure and function of mature language-processing systems. These spans of peak cortical plasticity have been called “sensitive periods.” Here, we describe a series of studies investigating the effects of delays in second language (L2) acquisition on different subsystems within language. First, we review the effects of the altered language experience of congenitally deaf subjects on cerebral systems important for processing written English and American Sign Language (ASL). Second, we present behavioral and electrophysiological studies of L2 semantic and syntactic processing in Chinese-English bilinguals who acquired their second language over a wide range of ages. Third, we review semantic, syntactic, and prosodic processing in native Spanish and native Japanese late-learners of English. These approaches have provided converging evidence, indicating that delays in language acquisition have minimal effects on some aspects of semantic processing. In contrast, delays of even a few years result in deficits in some types of syntactic processing and differences in the organization of cortical systems used to process syntactic information. The different subsystems of language which rely on different cortical areas, including semantic, syntactic, phonological, and prosodic processing, may have different developmental time courses that in part determine the different sensitive period effects observed. Humans, in comparison to other animals, go through a protracted period of post-natal development that lasts at least 15 years (Chugani & Phelps, 1986; Huttenlocher, 1990).
Developmental theories of face perception and speech perception have similar goals. Theorists in both domains seek to explain infants’ early sophistication with regard to the detection and/or discrimination of facial and speech stimuli and to determine whether infants’ early abilities are due to mechanisms dedicated to the processing of specific biologically relevant stimuli or more general sensory/cognitive mechanisms. In addition, theorists in both domains seek to explain how experience with specific faces and speech sounds modifies infants’ perception. In this chapter, studies showing enhanced discriminability at phonetic boundaries, as well as studies on the perception of phonetic prototypes, exceptionally good instances representing the centers of phonetic categories, are described. The studies show that although phonetic boundary effects are common to monkey and man, prototype effects are not. For human listeners prototypes play a unique role in speech perception. They function like “perceptual magnets,” attracting nearby members of the category. By 6 months of age the prototype’s perceptual magnet effect is language-specific. Exposure to a specific language thus alters infants’ perception prior to the acquisition of word meaning and linguistic contrast. These results support a new theory, the Native Language Magnet (NLM) theory, which describes how innate factors and early experience with language interact in the development of speech perception.
This study provided a longitudinal examination of Chinese learners' acquisition of Korean vowels. Specifically, I examined the Korean monophthongs /i, e, ɨ, ʌ, a, u, o/ produced by Chinese learners at 1 month and at 12 months of learning, and sought to verify empirically how their native language shapes the learning of Korean vowels, in terms of the Perceptual Assimilation Model (henceforth PAM; Best, 1993, 1994; Best & Tyler, 2007) and the Speech Learning Model (henceforth SLM; Flege, 1987; Bohn & Flege, 1992; Flege, 1995). Most of the present results can be explained similarly by the PAM and the SLM; the only discrepancy between the two models is found in the 'similar' category of sounds between the learners' native language and the target language. Specifically, the acquisition pattern of /u/ and /o/ in Korean is well accounted for by the PAM, but not by the SLM. The SLM did not explain why the Chinese learners had difficulty in acquiring the Korean vowel /u/, because according to the SLM the vowel /u/ in Chinese (the native language) is matched either to the vowel /u/ or to the vowel /o/ in Korean (the target language); that is, there is only a one-to-one matching relationship between the native language and the target language. In contrast, the Chinese learners' difficulty with the Korean vowel /u/ is well accounted for by the PAM, in that the Chinese vowel /u/ is matched to the Korean vowel pair /o, u/ rather than to a single vowel, /o/ or /u/.
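The one-to-one versus one-to-many mapping contrast at issue here can be summarised in a toy sketch, shown below. The category labels and the "difficulty" rule are simplifications introduced purely for illustration; they are not formal implementations of the SLM or the PAM.

```python
# Toy illustration of the mapping contrast described above (hypothetical, simplified):
# an SLM-style one-to-one match versus a PAM-style assimilation of one L1 vowel to an
# L2 category *pair*. Labels like "zh_u" and "ko_u" are invented for this sketch.
slm_mapping = {"zh_u": "ko_u"}            # one-to-one: Chinese /u/ maps to a single Korean vowel
pam_mapping = {"zh_u": ("ko_o", "ko_u")}  # one-to-many: Chinese /u/ assimilates to the Korean /o, u/ pair

def predicts_difficulty(mapping, l1_vowel):
    """Under this toy reading, mapping one L1 vowel onto more than one L2 category
    predicts learning difficulty for those L2 vowels."""
    target = mapping.get(l1_vowel)
    return isinstance(target, tuple) and len(target) > 1

print(predicts_difficulty(slm_mapping, "zh_u"))  # False: the one-to-one match predicts no special difficulty
print(predicts_difficulty(pam_mapping, "zh_u"))  # True: the pair assimilation predicts difficulty with Korean /u/ vs /o/
```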
Language and face processing develop in similar ways during the first year of life. Early in the first year of life, infants demonstrate broad abilities for discriminating among faces and speech. These discrimination abilities then become tuned to frequently experienced groups of people or languages. This process of perceptual development occurs between approximately 6 and 12 months of age and is largely shaped by experience. However, the mechanisms underlying perceptual development during this time, and whether they are shared across domains, remain largely unknown. Here, we highlight research findings across domains and propose a top-down/bottom-up processing approach as a guide for future research. It is hypothesized that perceptual narrowing and tuning in development is the result of a shift from primarily bottom-up processing to a combination of bottom-up and top-down influences. In addition, we propose word learning as an important top-down factor that shapes tuning in both the speech and face domains, leading to similar observed developmental trajectories across modalities. Importantly, we suggest that perceptual narrowing/tuning is the result of multiple interacting factors and not explained by the development of a single mechanism.