Article

Acquisition of Sign Languages


Abstract

Natural sign languages of deaf communities are acquired on the same time scale as that of spoken languages if children have access to fluent signers providing input from birth. Infants are sensitive to linguistic information provided visually, and early milestones show many parallels. The modality may affect various areas of language acquisition; such effects include the form of signs (sign phonology), the potential advantage presented by visual iconicity, and the use of spatial locations to represent referents, locations, and movement events. Unfortunately, the vast majority of deaf children do not receive accessible linguistic input in infancy, and these children experience language deprivation. Negative effects on language are observed when first-language acquisition is delayed. For those who eventually begin to learn a sign language, earlier input is associated with better language and academic outcomes. Further research is especially needed with a broader diversity of participants. Expected final online publication date for the Annual Review of Linguistics, Volume 7 is January 14, 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.


... Based on a general consensus that the reported poor academic achievement is not a direct consequence of hearing loss (Marschark, 1993; Moores, 2001; Niederberger and Prinz, 2005; Convertino et al., 2009; Hall, 2015), several scholars have focused their efforts on the potential paths of linguistic and metalinguistic transfer offered by SL as a medium of instruction in deaf classrooms. Several of these studies reported adequate and improved academic performance of deaf students when exposed to SL as a medium of instruction in bilingual educational contexts (Nover et al., 1998; Rudner et al., 2015; Holmer et al., 2016; Hrastinski and Wilbur, 2016; Scott and Hoffmeister, 2017; Sambu et al., 2018; Allen and Morere, 2020; Lillo-Martin and Henner, 2021). ...
... According to Cummins, the mastery of L1 can only support L2 learning if adequate exposure to L2 exists as well as the motivation to learn. Conceptual and cognitive knowledge acquired in L1 can then be used to facilitate the acquisition of proficiency in L2 (Nover et al., 1998; Hrastinski and Wilbur, 2016; Allen and Morere, 2020; Lillo-Martin and Henner, 2021). ...
... Numerous researchers have gone beyond the linguistic aspects to explain that deaf bilingualism is not limited to the linguistic and metalinguistic aspects of language learning (Dalle, 2003; Ohna, 2004; Leigh, 2009; Maxwell-McCaw and Zea, 2011; Grosjean, 2010; Bedoin, 2018). Several socio-cultural and ethnolinguistic factors intervene in the learning dynamics of SL as well as of the majority spoken language. ...
Article
Full-text available
Deaf educational methods have been the subject of controversy between advocates of the oralist and the bilingual approaches for centuries. Over the past decades, the bilingual-bicultural method has proved its effectiveness in facilitating formal school learning and in reducing the higher rate of illiteracy found among deaf people relative to the hearing population. The bilingual-bicultural model in Western countries is designed and implemented in predominantly monolingual contexts, or in multilingual contexts with a dominant majority language. It aims at providing deaf learners with simultaneous dual access to the deaf and hearing cultures through sign language and the written form of the majority spoken language. The objective of this dual access is to create a balanced form of bilingualism which will reinforce literacy development. In the Western context, the relative proximity of the written and spoken forms of the majority language allows the written form to function as a means of access to the socio-cultural heritage of the hearing community and to develop a sufficient degree of autonomy in a world where literacy has become crucial. The application of the Western bilingual-bicultural model may at first glance seem tempting as a way to mitigate the significant rate of illiteracy affecting 98% of the deaf Tunisian population. However, the diglossic situation in Tunisia, and in the Maghreb countries in general, rests upon the existence of two linguistic forms exhibiting considerable linguistic differences. On the one hand, Tunisian Dialectal Arabic (TDA) is the spoken form, and is the vehicle for the transmission of the Tunisian socio-cultural heritage. On the other hand, the written form, Modern Standard Arabic (MSA), assumes the role of institutional and literacy language. This particular situation requires a specific educational framework different from the classical bilingual-bicultural approach.
We hypothesize that without taking Tunisian Dialectal Arabic into account, learners will not access the Tunisian hearing culture. This situation will potentially hinder literacy development in Modern Standard Arabic. Our article puts forward a trilingual-bicultural educational model adapted to the Tunisian diglossic situation. It includes Tunisian Sign Language (TSL) and written TDA, as representatives of the deaf and hearing cultures, which will both contribute to a more fluid development of a third language, written MSA, as the literacy language.
... In these examples, turn timing and responsiveness slowed down for children as they reached various cognitive and social milestones. While DHH children who acquire a national sign language at home will proceed through the language acquisition process along a timeline similar to that of spoken language acquisition (Newport and Meier, 1985; Lillo-Martin and Henner, 2021), and with similar parallel cognitive and social developmental milestones, DHH children who acquire a sign language at school enter this ecology at a much later stage of cognitive and social development, in addition to the differences between home and school social settings (Singleton and Morgan, 2006). ...
Article
Full-text available
The task of transitioning from one interlocutor to another in conversation – taking turns – is a complex social process, but typically transpires rapidly and without incident in conversations between adults. Cross-linguistic similarities in turn timing and turn structure have led researchers to suggest that it is a core antecedent to human language and a primary driver of an innate “interaction engine.” This review focuses on studies that have tested the extent of turn timing and turn structure patterns in two areas: across language modalities and in early language development. Taken together, these two lines of research offer predictions about the development of turn-taking for children who are deaf or hard of hearing (DHH) acquiring sign languages. We introduce considerations unique to signed language development – namely the heterogeneous ecologies in which signed language acquisition occurs – suggesting that more work is needed to account for the diverse circumstances of language acquisition for DHH children. We discuss differences between early sign language acquisition at home compared to later sign language acquisition at school in classroom settings, particularly in countries with national sign languages. We also compare acquisition in these settings to communities without a national sign language where DHH children acquire local sign languages. In particular, we encourage more documentation of naturalistic conversations between DHH children who sign and their caregivers, teachers, and peers.
Further, we suggest that future studies should consider: visual/manual cues to turn-taking and whether they are the same or different for child or adult learners; the protracted time-course of turn-taking development in childhood, in spite of the presence of turn-taking abilities early in development; and the unique demands of language development in multi-party conversations that happen in settings like classrooms for older children versus language development at home in dyadic interactions.
... Unfortunately, a number of the signs in the SSD do not achieve the required level of effective communication. This may be due to some teachers' lack of the coordination and organization needed to become fluent signers (Chen Pichler & Koulidobrova, 2015; Lillo-Martin & Henner, 2021), or to the fact that the SSD does not contain enough signs to assist teachers in becoming fluent signers, which hinders the communication and learning processes of deaf students in their schools and societies. Accordingly, teachers' mastery of sign language is necessary for classroom management, especially concerning the human relations that must be established with learners, the recipients. ...
Article
Full-text available
Communication through sign language is essential for teachers of deaf students. This study sought to assess and evaluate the sign language proficiency of preservice teachers of deaf students to help preservice teacher preparation program designers identify what aspects of sign language need to be focused on and provide recommendations to improve preservice teachers' sign language levels. An exploratory research design was used through questionnaires distributed to a convenience sample. The research subjects were undergraduate female students (N = 36) enrolled in a Saudi Arabian university's preservice preparation program for teachers of deaf students. This study's results indicate that preservice teachers of deaf and hard of hearing students scored highly for lexical signs, at an average level for iconic lexical signs, but at a low level for the domain of arbitrary lexical signs. There was a significant effect of participants' grade point averages (GPAs) on their overall sign language proficiency score. No significant effect of age, academic level, or the number of completed sign language training courses on overall sign language proficiency score was reported. This study's outcomes show that preservice teachers' sign language level needs to be improved and developed. Recommendations are presented for future research and preservice teacher preparation program designers to develop learners' sign language skills.
... In sign languages, spatial relations between objects are most frequently encoded by classifier constructions (e.g., Emmorey, 2002; Manhardt et al., 2020; Perniss et al., 2015; Schembri, 2003; Sümer, 2015; Zwitserlood, 2012). Following several scholars (e.g., Lillo-Martin & Henner, 2021; Mayberry, 1998; Newport, 1990), we use the term native to refer to deaf individuals who had sign language exposure from their signing deaf parents immediately following birth. We use the term late to refer to deaf individuals whose sign language exposure came mostly from their deaf signing peers at the school for the deaf. ...
Article
Prior work with hearing children acquiring a spoken language as their first language shows that spatial language and cognition are related systems and spatial language use predicts spatial memory. Here, we further investigate the extent of this relationship in signing deaf children and adults and ask if late sign language exposure, as well as the frequency and the type of spatial language use that might be affected by late exposure, modulates subsequent memory for spatial relations. To do so, we compared spatial language and memory of 8-year-old late-signing children (after 2 years of exposure to a sign language at the school for the deaf) and late-signing adults to their native-signing counterparts. We elicited picture descriptions of Left-Right relations in Turkish Sign Language (Türk İşaret Dili) and measured the subsequent recognition memory accuracy of the described pictures. Results showed that late-signing adults and children were similar to their native-signing counterparts in how often they encoded the spatial relation. Late-signing adults, but not children, differed from their native-signing counterparts in the type of spatial language they used. However, neither late sign language exposure nor the frequency and type of spatial language use modulated spatial memory accuracy. Therefore, even though late language exposure seems to influence the type of spatial language use, this does not predict subsequent memory for spatial relations. We discuss the implications of these findings in light of theories concerning the correspondence between spatial language and cognition as related or rather independent systems.
... Although much research has found that earlier interventions with cochlear implants or hearing aids lead to better spoken language outcomes, age-appropriate language development does not occur for many DHH children, even using modern hearing technologies (see, e.g., the range of spoken-language outcomes reported for children with cochlear implants in Dettman et al., 2016). In contrast, sign languages are acquired by deaf children with adequate language exposure along a similar timeline to that of hearing children acquiring spoken language (e.g., Lillo-Martin & Henner, 2021). Recent findings show that DHH infants whose hearing parents began signing with them in early infancy show vocabulary sizes comparable to those of DHH infants who had deaf signing parents (Caselli et al., 2021). ...
Article
Full-text available
Much research has found disrupted executive functioning (EF) in deaf and hard-of-hearing (DHH) children; while some theories emphasize the role of auditory deprivation, others posit delayed language experience as the primary cause. This study investigated the role of language and auditory experience in parent-reported EF for 123 preschool-aged children (Mage = 60.1 months, 53.7% female, 84.6% White). Comparisons between DHH and typically hearing children exposed to language from birth (spoken or signed) showed no significant differences in EF despite drastic differences in auditory input. Linear models demonstrated that earlier language exposure predicted better EF (β = .061-.341), while earlier auditory exposure did not. Few participants exhibited clinically significant executive dysfunction. Results support theories positing that language, not auditory experience, scaffolds EF development.
... Central to the study of bi-and multilingual development is the notion that language competence and accessible language are necessary for learning (e.g., Gibson et al., 1997;Lillo-Martin & Henner, 2021; see Dostal & Graham, 2021, for a discussion). As a result, it is important to consider several essential and unique variables in order to avoid making false comparisons between students and programs when presenting or interpreting data from research about bi-and multilingual education. ...
Article
This article was written as a rejoinder to Mayer & Trezek’s (2020) review of the literature regarding the literacy achievement of deaf children who are being educated in schools and programs that espouse a bilingual ASL/English instruction approach. This rejoinder has two purposes: first, to outline factors that we suggest are important for all researchers and practitioners who generate and consume knowledge regarding bi- and multilingual deaf education; second, to respond to Mayer & Trezek’s article. Specifically, we recommend careful attention to and inclusion of individual- and school-level variables, use of appropriate comparison groups, and a valuing of information that uses various methodologies (both quantitative and qualitative). These recommendations are made in the spirit of improving the state of knowledge and the production/consumption of research that informs policy and practice in bi- and multilingual deaf education.
... Deaf children born to signing parents acquire sign language the same way that hearing children born to speaking parents acquire spoken language (Petitto & Marentette, 1991; Lillo-Martin & Henner, 2020): they go through the same stages of language acquisition with approximately equivalent timing. Most deaf children, however, are born to hearing parents who do not know sign language and may not seek out sign language input for their children (despite demonstrated positive effects of learning a sign language and no known negative effects; Humphries et al., 2016; Hall, Hall, & Caselli, 2019). Deaf children who do not learn to speak and have no access to sign language lack linguistic input. ...
Article
Linguistic input has an immediate effect on child language, making it difficult to discern whatever biases children may bring to language-learning. To discover these biases, we turn to deaf children who cannot acquire spoken language and are not exposed to sign language. These children nevertheless produce gestures, called homesigns, which have structural properties found in natural language. We ask whether these properties can be traced to gestures produced by hearing speakers in Nicaragua, a gesture-rich culture, and in the USA, a culture where speakers rarely gesture without speech. We studied 7 homesigning children and hearing family members in Nicaragua, and 4 in the USA. As expected, family members produced more gestures without speech, and longer gesture strings, in Nicaragua than in the USA. However, in both cultures, homesigners displayed more structural complexity than family members, and there was no correlation between individual homesigners and family members with respect to structural complexity. The findings replicate previous work showing that the gestures hearing speakers produce do not offer a model for the structural aspects of homesign, thus suggesting that children bring to language-learning biases to construct, or learn, these properties. The study also goes beyond the current literature in three ways. First, it extends homesign findings to Nicaragua, where homesigners received a richer gestural model than USA homesigners. Moreover, the relatively large number of gestures in Nicaragua made it possible to take advantage of more sophisticated statistical techniques than were used in the original homesign studies. Second, the study extends the discovery of complex noun phrases to Nicaraguan homesign.
The almost complete absence of complex noun phrases in the hearing family members of both cultures provides the most convincing evidence to date that homesigners, and not their hearing family members, are the ones who introduce structural properties into homesign. Finally, by extending the homesign phenomenon to Nicaragua, the study offers insight into the gestural precursors of an emerging sign language. The findings shed light on the types of structures that an individual can introduce into communication before that communication is shared within a community of users, and thus on the roots of linguistic structure.
Article
Purpose This case study describes the language evaluation and treatment of a 5-year-old boy, Lucas, who is Deaf, uses American Sign Language (ASL), and presented with a language disorder despite native access to ASL and no additional diagnosis that would explain the language difficulties. Method Lucas participated in an evaluation where his nonverbal IQ, fine motor, and receptive/expressive language skills were assessed. Language assessment included both formal and informal evaluation procedures. Language intervention was delivered across 7 weeks through focused stimulation. Results Evaluation findings supported diagnosis of a language disorder unexplained by other factors. Visual analysis revealed an improvement in some behaviors targeted during intervention (i.e., number of different verbs and pronouns), but not others. In addition, descriptive analysis indicated qualitative improvement in Lucas' language production. Parent satisfaction survey results showed a high level of satisfaction with therapy progress, in addition to a belief that Lucas improved in language areas targeted. Conclusions This study adds to the growing body of literature that unexplained language disorders in signed languages exist and provides preliminary evidence for positive outcomes from language intervention for a Deaf signing child. The case described can inform professionals who work with Deaf signing children (e.g., speech-language pathologists, teachers of the Deaf, and parents of Deaf children) and serve as a potential starting point in evaluation and treatment of signed language disorders. Supplemental Material https://doi.org/10.23641/asha.16725601
Article
Studies with Deaf and blind individuals demonstrate that linguistic and sensory experiences during sensitive periods have potent effects on the neurocognitive basis of language. Native users of sign and spoken languages recruit similar fronto-temporal systems during language processing. By contrast, delays in sign language access impact proficiency and the neural basis of language. Analogously, early- but not late-onset blindness modifies the neural basis of language. People born blind recruit ‘visual’ areas during language processing, show reduced left-lateralization of language, and exhibit enhanced performance on some language tasks. Sensitive-period plasticity in and outside fronto-temporal language systems shapes the neural basis of language.
Article
Deaf children whose hearing losses are so severe that they cannot acquire spoken language, and whose hearing parents have not exposed them to sign language, use gestures called homesigns to communicate. Homesigns have been shown to contain many of the properties of natural languages. Here we ask whether homesign has structure building devices for negation and questions. We identify two meanings (negation, question) that correspond semantically to propositional functions, that is, to functions that apply to a sentence (whose semantic value is a proposition, ϕ) and yield another proposition that is more complex (⌝ϕ for negation; ?ϕ for question). Combining ϕ with ⌝ or ? thus involves sentence modification. We propose that these negative and question functions are structure building operators, and we support this claim with data from an American homesigner. We show that: (a) each meaning is marked by a particular form in the child's gesture system (side-to-side headshake for negation, manual flip for question); (b) the two markers occupy systematic, and different, positions at the periphery of the gesture sentences (headshake at the beginning, flip at the end); and (c) the flip is extended from questions to other uses associated with the wh-form (exclamatives, referential expressions of location) and thus functions like a category in natural languages. If what we see in homesign is a language creation process (Goldin-Meadow, 2003), and if negation and question formation involve sentential modification, then our analysis implies that homesign has at least this minimal sentential syntax. Our findings thus contribute to ongoing debates about properties that are fundamental to language and language learning.
Article
Full-text available
Deaf late signers provide a unique perspective on the impact of impoverished early language exposure on the neurobiology of language: insights that cannot be gained from research with hearing people alone. Here we contrast the effect of age of sign language acquisition in hearing and congenitally deaf adults to examine the potential impact of impoverished early language exposure on the neural systems supporting a language learnt later in life. We collected fMRI data from deaf and hearing proficient users (N = 52) of British Sign Language (BSL), who learnt BSL either early (native) or late (after the age of 15 years) whilst they watched BSL sentences or strings of meaningless nonsense signs. There was a main effect of age of sign language acquisition (late > early) across deaf and hearing signers in the occipital segment of the left intraparietal sulcus. This finding suggests that late learners of sign language may rely on visual processing more than early learners, when processing both linguistic and nonsense sign input – regardless of hearing status. Region-of-interest analyses in the posterior superior temporal cortices (STC) showed an effect of age of sign language acquisition that was specific to deaf signers. In the left posterior STC, activation in response to signed sentences was greater in deaf early signers than deaf late signers. Importantly, responses in the left posterior STC in hearing early and late signers did not differ, and were similar to those observed in deaf early signers. These data lend further support to the argument that robust early language experience, whether signed or spoken, is necessary for left posterior STC to show a ‘native-like’ response to a later learnt language.
Article
Full-text available
In this article we discuss the practice and politics of translanguaging in the context of deaf signers. Applying the translanguaging concept to deaf signers brings a different perspective by focusing on sensorial accessibility. While the sensory orientations of deaf people are at the heart of their translanguaging practices, sensory asymmetries are often not acknowledged in translanguaging theory and research. This has led to a bias in the use of translanguaging in deaf educational settings that overlooks existing power disparities conditioning individual languaging choices. We ask whether translanguaging and attending to deaf signers' fluid language practices are compatible with ongoing and necessary efforts to maintain and promote sign languages as named languages. The concept of translanguaging challenges the six-decade-long project of sign linguistics, and by extension Deaf Studies, to legitimize the status of sign languages as minority languages. We argue that the minority language paradigm is still useful in finding tools to understand deaf people's languaging practices, and we close with a call for closer attention to the level of sensory conditions, and the corresponding sensory politics, in shaping languaging practices. Realizing the emancipatory potential of acknowledging deaf people's translanguaging skills requires recognizing the historical and contemporary contexts constantly conditioning individual languaging choices.
Article
Full-text available
Discussions on disability justice within the university have centered disabled students but leave us with questions about disability justice for disabled scholars and the disabled communities affiliated with universities, viewed through the lens of signed language instruction and deaf people. Universities use American Sign Language (ASL) programs to exploit the labors of deaf people without providing a return to disabled communities or disabled academics. ASL courses offer valuable avenues for cripping the university. Through the framework of cripping, we argue that universities that offer ASL classes and profit from them have an obligation to ensure that disabled students and disabled academics are able to navigate and succeed in their systems. Disabled students, communities, and academics should capitalize upon the popularity of ASL to expand accessibility and the place of disability in higher education.
Article
Full-text available
Purpose This article examines whether syntactic and vocabulary abilities in American Sign Language (ASL) facilitate 6 categories of language-based analogical reasoning. Method Data for this study were collected from 267 deaf participants, aged 7;6 (years;months) to 18;5. The data were collected from an ongoing study initially funded by the U.S. Institute of Education Sciences in 2010. The participants were given assessments of ASL vocabulary and syntax knowledge and a task of language-based analogies presented in ASL. The data were analyzed using mixed-effects linear modeling to first see how language-based analogical reasoning developed in deaf children and then to see how ASL knowledge influenced this developmental trajectory. Results Signing deaf children were shown to demonstrate language-based reasoning abilities in ASL consistent with both chronological age and home language environment. Notably, when ASL vocabulary and syntax abilities were statistically taken into account, these were more important in fostering the development of language-based analogical reasoning abilities than were chronological age and home language. We further showed that ASL vocabulary ability and ASL syntactic knowledge made different contributions to different analogical reasoning subconstructs. Conclusions ASL is a viable language that supports the development of language-based analogical reasoning abilities in deaf children.
Article
Full-text available
The assessments designed for and analyzed in this study used a task-based language design template rooted in theories of language reflecting heteroglossic language practices and funds-of-knowledge learning theories, understood as transforming classroom teaching, learning, and assessment through continua-of-biliteracy lenses. Using a participatory action research model, we created assessment instruments for pre-service English teachers in Oaxaca, Mexico, integrating language practices from communities and classrooms into assessments. Participants completed two reading and writing tasks. Task 1 was intentionally designed to engage learners’ English and Spanish language resources. Task 2 was restricted to English only. Our analyses indicated (1) that pre-service English teachers performed significantly better on the multilingual task than on the monolingual English task, and (2) that integrating multilingual resources within assessment design can allow test-takers to demonstrate more complex or higher-order thinking skills in the language they are learning. We offer empirical evidence of an assessment approach that is consistent with the broadly supported principle of making use of all of students’ linguistic resources for the purpose of teaching and learning.
Article
Full-text available
The infant brain may be predisposed to identify perceptually salient cues that are common to both signed and spoken languages. Recent theory based on spoken languages has advanced sonority as one of these potential language acquisition cues. Using a preferential looking paradigm with an infrared eye tracker, we explored visual attention of hearing 6- and 12-month-olds with no sign language experience as they watched fingerspelling stimuli that either conformed to high sonority (well-formed) or low sonority (ill-formed) values, which are relevant to syllabic structure in signed language. Younger babies showed highly significant looking preferences for well-formed, high sonority fingerspelling, while older babies showed no preference for either fingerspelling variant, despite showing a strong preference in a control condition. The present findings suggest babies possess a sensitivity to specific sonority-based contrastive cues at the core of human language structure that is subject to perceptual narrowing, irrespective of language modality (visual or auditory), shedding new light on universals of early language learning.
Article
Full-text available
Congenitally deaf individuals exhibit enhanced visuospatial abilities relative to normally hearing individuals. An early example is the increased sensitivity of deaf signers to stimuli in the visual periphery (Neville and Lawson, 1987a). While these enhancements are robust and extend across a number of visual and spatial skills, they seem not to extend to other domains which could potentially build on these enhancements. For example, congenitally deaf children, in the absence of adequate language exposure and acquisition, do not develop typical social cognition skills as measured by traditional Theory of Mind tasks. These delays/deficits occur despite their presumed lifetime use of visuo-perceptual abilities to infer the intentions and behaviors of others (e.g., Pyers and Senghas, 2009; O’Reilly et al., 2014). In a series of studies, we explore the limits on the plasticity of visually based socio-cognitive abilities, from perspective taking to Theory of Mind/False Belief, in rarely studied individuals: deaf adults who have not acquired a conventional language (Homesigners). We compared Homesigners’ performance to that of two other understudied groups in the same culture: Deaf signers of an emerging language (Cohort 1 of Nicaraguan Sign Language), and hearing speakers of Spanish with minimal schooling. We found that homesigners performed equivalently to both comparison groups with respect to several visual socio-cognitive abilities: Perspective Taking (Levels 1 and 2), adapted from Masangkay et al. (1974), and the False Photograph task, adapted from Leslie and Thaiss (1992). However, a lifetime of visuo-perceptual experiences (observing the behavior and interactions of others) did not support success on False Belief tasks, even when linguistic demands were minimized. Participants in the comparison groups outperformed the Homesigners, but did not universally pass the False Belief tasks. 
Our results suggest that while some of the social development achievements of young typically developing children may be dissociable from their linguistic experiences, language and/or educational experiences clearly scaffold the transition into False Belief understanding. The lack of experience using a shared language cannot be overcome, even with the benefit of many years of observing others’ behaviors and the potential neural reorganization and visuospatial enhancements resulting from deafness.
Article
Full-text available
This paper presents a critical examination of key concepts in the study of (signed and spoken) language and multimodality. It shows how shifts in conceptual understandings of language use, moving from bilingualism to multilingualism and (trans)languaging, have resulted in the revitalisation of the concept of language repertoires. We discuss key assumptions and analytical developments that have shaped the sociolinguistic study of signed and spoken language multilingualism as separate from different strands of multimodality studies. In most multimodality studies, researchers focus on participants using one named spoken language within broader embodied human action. Thus while attending to multimodal communication, they do not attend to multilingual communication. In translanguaging studies the opposite has happened: scholars have attended to multilingual communication without really paying attention to multimodality and simultaneity, and hierarchies within the simultaneous combination of resources. The (socio)linguistics of sign language has paid attention to multimodality but has only very recently started to focus on multilingual contexts where multiple sign and/or multiple spoken languages are used. There is currently little transaction between these areas of research. We argue that the lens of semiotic repertoires enables synergies to be identified and provides a holistic focus on action that is both multilingual and multimodal.
Article
Full-text available
Indicating verbs can be directed towards locations in space associated with their arguments. The primary debate about these verbs is whether this directionality is akin to grammatical agreement or whether it represents a fusion of both morphemic and gestural elements. To move the debate forward, more empirical evidence is needed. We consider linguistic and social factors in 1436 indicating verb tokens from the BSL Corpus. Results reveal that modification is not obligatory and that patient modification is conditioned by several factors such as constructed action. We argue that our results provide some support for the claim that indicating verbs represent a fusion of morphemic and gestural elements.
Article
Full-text available
Purpose: There is a need to better understand the epidemiological relationship between language development and psychiatric symptomatology. Language development can be particularly impacted by social factors, as seen in the developmental choices made for deaf children, which can create language deprivation. A possible mental health syndrome may be present in deaf patients with severe language deprivation. Methods: Electronic databases were searched to identify publications focusing on language development and mental health in the deaf population. Screening of relevant publications narrowed the search results to 35 publications. Results: Although there is very limited empirical evidence, there appear to be suggestions of a mental health syndrome by clinicians working with deaf patients. Possible features include language dysfluency, fund of knowledge deficits, and disruptions in thinking, mood, and/or behavior. Conclusion: The clinical specialty of deaf mental health appears to be struggling with a clinically observed phenomenon that has yet to be empirically investigated and defined within the DSM. Descriptions of patients within the clinical setting suggest a language deprivation syndrome. Language development experiences have an epidemiological relationship with psychiatric outcomes in deaf people. This requires more empirical attention and has implications for other populations with behavioral health disparities as well.
Article
Full-text available
A long-standing belief is that sign language interferes with spoken language development in deaf children, despite a chronic lack of evidence supporting this belief. This deserves discussion as poor life outcomes continue to be seen in the deaf population. This commentary synthesizes research outcomes with signing and non-signing children and highlights fully accessible language as a protective factor for healthy development. Brain changes associated with language deprivation may be misrepresented as sign language interfering with spoken language outcomes of cochlear implants. This may lead to professionals and organizations advocating for preventing sign language exposure before implantation and spreading misinformation. The existence of a single, time-sensitive language acquisition window means a strong possibility of permanent brain changes when spoken language is not fully accessible to the deaf child and sign language exposure is delayed, as is often standard practice. There is no empirical evidence for the harm of sign language exposure but there is some evidence for its benefits, and there is growing evidence that lack of language access has negative implications. This includes cognitive delays, mental health difficulties, lower quality of life, higher trauma, and limited health literacy. Claims of cochlear implant- and spoken language-only approaches being more effective than sign language-inclusive approaches are not empirically supported. Cochlear implants are an unreliable standalone first-language intervention for deaf children. Priorities of deaf child development should focus on healthy growth of all developmental domains through a fully accessible first-language foundation such as sign language, rather than auditory deprivation and speech skills.
Article
Full-text available
Language emergence describes moments in historical time when nonlinguistic systems become linguistic. Because language can be invented de novo in the manual modality, this offers insight into the emergence of language in ways that the oral modality cannot. Here we focus on homesign, gestures developed by deaf individuals who cannot acquire spoken language and have not been exposed to sign language. We contrast homesign with (a) gestures that hearing individuals produce when they speak, as these cospeech gestures are a potential source of input to homesigners, and (b) established sign languages, as these codified systems display the linguistic structure that homesign has the potential to assume. We find that the manual modality takes on linguistic properties, even in the hands of a child not exposed to a language model. But it grows into full-blown language only with the support of a community that transmits the system to the next generation.
Article
Full-text available
A wide range of linguistic phenomena contribute to our understanding of the architecture of the human linguistic system. In this paper we present a proposal dubbed Language Synthesis to capture bilingual phenomena including code-switching and ‘transfer’ as automatic consequences of the addition of a second language, using basic concepts of Minimalism and Distributed Morphology. Bimodal bilinguals, who use a sign language and a spoken language, provide a new type of evidence regarding possible bilingual phenomena, namely code-blending, the simultaneous production of (aspects of) a message in both speech and sign. We argue that code-blending also follows naturally once a second articulatory interface is added to the model. Several different types of code-blending are discussed in connection to the predictions of the Synthesis model. Our primary data come from children developing as bimodal bilinguals, but our proposal is intended to capture a wide range of bilingual effects across any language pair.
Article
Full-text available
Failing to acquire language in early childhood because of language deprivation is a rare and exceptional event, except in one population. Deaf children who grow up without access to indirect language through listening, speech-reading, or sign language experience language deprivation. Studies of Deaf adults have revealed that late acquisition of sign language is associated with lasting deficits. However, much remains unknown about language deprivation in Deaf children, allowing myths and misunderstandings regarding sign language to flourish. To fill this gap, we examined signing ability in a large naturalistic sample of Deaf children attending schools for the Deaf where American Sign Language (ASL) is used by peers and teachers. Ability in ASL was measured using a syntactic judgment test and language-based analogical reasoning test, which are two sub-tests of the ASL Assessment Inventory. The influence of two age-related variables were examined: whether or not ASL was acquired from birth in the home from one or more Deaf parents, and the age of entry to the school for the Deaf. Note that for non-native signers, this latter variable is often the age of first systematic exposure to ASL. Both of these types of age-dependent language experiences influenced subsequent signing ability. Scores on the two tasks declined with increasing age of school entry. The influence of age of starting school was not linear. Test scores were generally lower for Deaf children who entered the school of assessment after the age of 12. The positive influence of signing from birth was found for students at all ages tested (7;6–18;5 years old) and for children of all age-of-entry groupings. Our results reflect a continuum of outcomes which show that experience with language is a continuous variable that is sensitive to maturational age.
Article
Full-text available
The primary goal of this dissertation is to investigate the relationship between Universal Grammar and the properties that Universal Grammar constrains, by investigating how language is created/acquired. The framework proposed in this dissertation provides us with tools for predicting what will and will not appear in linguistic systems of homesigners, late learners of a first language, and native signers/speakers of a given language. New data presented from the spontaneous production and experimental studies of Brazilian homesigners, late learners and native signers of Libras (Brazilian signed language) support the proposal with regards to the strength of rootedness of recursion, merge, hierarchical structural dependency, word order, and topic. If a particular property of language is ‘strongly rooted’, this indicates a high degree of innately specified guidance specifically for language development. Also, there are some properties that are constrained by UG, but with possible options, which are considered ‘somewhat rooted’ in my framework. The studies described in this thesis test hypotheses using elicited production, spontaneous production, and comprehension involving aspects of language, which fall into the categories of ‘strongly rooted’ and ‘somewhat rooted’ properties. The findings provide support for merge, recursion, and hierarchical structural dependency as ‘strongly rooted’ properties. ‘Somewhat rooted’ properties, in the form of word order and topic, were also supported by the findings from the experiments with the participants. The proposed framework in this thesis sets the stage for future hypothesis-driven research on language development and language creation.
Article
Full-text available
The incidence of sensorineural hearing loss ranges from 1 to 3 per 1000 live births in term healthy neonates, and 2-4 per 100 in high-risk infants, a ten-fold increase. Early identification and intervention with hearing augmentation within 6 months yields optimal effect. If undetected and without treatment, significant hearing impairment may negatively impact speech development and lead to disorders in psychological and mental behaviors. Hearing screening programs in newborns enable detection of hearing impairment in the first days after birth. Programs to identify hearing deficit have significantly improved over the past 2 decades, and their implementation continues to grow throughout the world. Initially based on risk factors, these programs identified only 50-75% of infants with hearing loss. Current recommendations are to conduct universal hearing screening in all infants. Techniques used primarily include automated auditory brainstem responses and otoacoustic emissions, which provide noninvasive recordings of physiologic auditory activity and are easily performed in neonates and infants. The aim of this review is to present the objectives, benefits and results of newborn hearing screening programs including the pros and cons of universal versus selective screening. A brief history and the anticipated future development of these programs will also be discussed.
Article
Full-text available
Deaf children are often described as having difficulty with executive function (EF), often manifesting in behavioral problems. Some researchers view these problems as a consequence of auditory deprivation; however, the behavioral problems observed in previous studies may not be due to deafness but to some other factor, such as lack of early language exposure. Here, we distinguish these accounts by using the BRIEF EF parent report questionnaire to test for behavioral problems in a group of Deaf children from Deaf families, who have a history of auditory but not language deprivation. For these children, the auditory deprivation hypothesis predicts behavioral impairments; the language deprivation hypothesis predicts no group differences in behavioral control. Results indicated that scores among the Deaf native signers (n = 42) were age-appropriate and similar to scores among the typically developing hearing sample (n = 45). These findings are most consistent with the language deprivation hypothesis, and provide a foundation for continued research on outcomes of children with early exposure to sign language.
Article
Sign languages are frequently described as having three verb classes. One, ‘agreeing’ verbs, indicates the person/number of its subject and object by modification of the beginning and ending locations of the verb. The second, ‘spatial’ verbs, makes a similar appearing modification of verb movement to represent the source and goal locations of the theme of a verb of motion. The third class, ‘plain’ verbs, is characterized as having neither of these types of modulations. A number of researchers have proposed accounts that collapse all of these types, or the person-agreeing and spatial verbs. Here we present evidence from late learners of American Sign Language and from the emergence of new sign languages that person agreement and locative agreement have a different status in these conditions, and we claim their analysis should be kept distinct, at least in certain ways.
Article
Vocabulary is a critical early marker of language development. The MacArthur Bates Communicative Development Inventory has been adapted to dozens of languages, and provides a bird’s-eye view of children’s early vocabularies, which can be informative for both research and clinical purposes. We present an update to the American Sign Language Communicative Development Inventory (the ASL-CDI 2.0, https://www.aslcdi.org), a normed assessment of early ASL vocabulary that can be widely administered online by individuals with no formal training in sign language linguistics. The ASL-CDI 2.0 includes receptive and expressive vocabulary, and a Gestures and Phrases section; it also introduces an online interface that presents ASL signs as videos. We validated the ASL-CDI 2.0 with expressive and receptive in-person tasks administered to a subset of participants. The norming sample presented here consists of 120 deaf children (ages 9 to 73 months) with deaf parents. We present an analysis of the measurement properties of the ASL-CDI 2.0. Vocabulary increases with age, as expected. We see an early noun bias that shifts with age, and a lag between receptive and expressive vocabulary. We present these findings with indications for how the ASL-CDI 2.0 may be used in a range of clinical and research settings.
Article
One of the key questions in the study of human language acquisition is the extent to which the development of neural processing networks for different components of language are modulated by exposure to linguistic stimuli. Sign languages offer a unique perspective on this issue, because prelingually Deaf children who receive access to complex linguistic input later in life provide a window into brain maturation in the absence of language, and subsequent neuroplasticity of neurolinguistic networks during late language learning. While the duration of sensitive periods of acquisition of linguistic subsystems (sound, vocabulary, and syntactic structure) is well established on the basis of L2 acquisition in spoken language, for sign languages, the relative timelines for development of neural processing networks for linguistic sub-domains are unknown. We examined neural responses of a group of Deaf signers who received access to signed input at varying ages to three linguistic phenomena at the levels of classifier signs, syntactic structure, and information structure. The amplitude of the N400 response to the marked word order condition negatively correlated with the age of acquisition for syntax and information structure, indicating increased cognitive load in these conditions. Additionally, the combination of behavioral and neural data suggested that late learners preferentially relied on classifiers over word order for meaning extraction. This suggests that late acquisition of sign language significantly increases cognitive load during analysis of syntax and information structure, but not word-level meaning.
Article
Sign language use in the (re)habilitation of children with cochlear implants (CIs) remains a controversial issue. Concerns that signing impedes spoken language development are based on research comparing children exposed to spoken and signed language (bilinguals) to children exposed only to speech (monolinguals), although abundant research demonstrates that bilinguals and monolinguals differ in language development. We control for bilingualism effects by comparing bimodal bilingual (signing-speaking) children with CIs (BB-CI) to those with typical hearing (BB-TH). Each child had at least one Deaf parent and was exposed to ASL from birth. The BB-THs were exposed to English from birth by hearing family members, while the BB-CIs began English exposure after cochlear implantation around 22 months of age. Elicited speech samples were analyzed for accuracy of English grammatical morpheme production. Although there was a trend toward lower overall accuracy in the BB-CIs, this seemed driven by increased omission of the plural -s, suggesting an exaggerated role of perceptual salience in this group. Errors of commission were rare in both groups. Because both groups were bimodal bilinguals, trends toward group differences were likely caused by delayed exposure to spoken language or hearing through a CI, rather than sign language exposure.
Article
Gaze following plays a role in parent-infant communication and is a key mechanism by which infants acquire information about the world from social input. Gaze following in Deaf infants has been understudied. Twelve Deaf infants of Deaf parents (DoD) who had native exposure to American Sign Language (ASL) were gender-matched and age-matched (±7 days) to 60 spoken-language hearing control infants. Results showed that the DoD infants had significantly higher gaze-following scores than the hearing infants. We hypothesize that in the absence of auditory input, and with support from ASL-fluent Deaf parents, infants become attuned to visual-communicative signals from other people, which engenders increased gaze following. These findings underscore the need to revise the ‘deficit model’ of deafness. Deaf infants immersed in natural sign language from birth are better at understanding the signals and identifying the referential meaning of adults’ gaze behavior compared to hearing infants not exposed to sign language. Broader implications for theories of social-cognitive development are discussed.
Article
Lexical iconicity (signs or words that resemble their meaning) is overrepresented in children's early vocabularies. Embodied theories of language acquisition predict that symbols are more learnable when they are grounded in a child's firsthand experiences. As such, pantomimic iconic signs, which use the signer's body to represent a body, might be more readily learned than other types of iconic signs. Alternatively, the structure mapping theory of iconicity predicts that learners are sensitive to the amount of overlap between form and meaning. In this exploratory study of early vocabulary development in American Sign Language (ASL), we asked whether type of iconicity predicts sign acquisition above and beyond degree of iconicity. We also controlled for concreteness and relevance to babies, two possible confounding factors. Highly concrete referents and concepts that are germane to babies may be amenable to iconic mappings. We reanalyzed a previously published set of ASL Communicative Development Inventory (CDI) reports from 58 deaf children learning ASL from their deaf parents (Anderson & Reilly, 2002). Pantomimic signs were more iconic than other types of iconic signs (perceptual, both pantomimic and perceptual, or arbitrary), but type of iconicity had no effect on acquisition. Children may not make use of the special status of pantomimic elements of signs. Their vocabularies are, however, shaped by degree of iconicity, which aligns with a structure mapping theory of iconicity, though other explanations are also compatible (e.g., iconicity in child-directed signing). Previously demonstrated effects of type of iconicity may be an artifact of the increased degree of iconicity among pantomimic signs.
Article
Cochlear implants (CIs) are a routine treatment for children identified with a qualifying hearing loss. The CI, however, must be accompanied by a long-term and intense auditory training regimen in order to possibly acquire spoken language with the device. This research investigates families’ experiences when they opted for the CI and undertook the task of auditory training, but the child failed to achieve what might be clinically considered “success” – the ability to function solely using spoken language. Using a science and technology studies informed approach that places the CI within a complex sociotechnical system, this research shows the uncertain trajectory of the CI, as well as the contingency of the very notions of success and failure. To do so, data from in-depth interviews with a diverse sample of parents (n = 11) were collected. Results show the shifting definitions of failure and success within families, as well as suggest areas for further exploration regarding clinical practice and pediatric CIs. First, professionals’ messaging often conveyed to parents a belief in the infallibility of the CI, which potentially caused “soft failure” to go undetected and unmitigated. Second, speech assessments used in clinical measurements of outcomes did not capture a holistic understanding of a child's identity and social integration, leaving out an important component for consideration of what a ‘good outcome’ is. Third, minority parents experience structural racism and clinical attitudes that may render “failure” more likely to be identified and expected in these children, an individualizing process that allows structural failures to go uncritiqued.
Article
Deaf and Hard of Hearing (DHH) children need to master at least one language (spoken or signed) to reach their full potential. Providing access to a natural sign language supports this goal. Despite evidence that natural sign languages are beneficial to DHH children, many researchers and practitioners advise families to focus exclusively on spoken language. We critique the Pediatrics article ‘Early Sign Language Exposure and Cochlear Implants’ (Geers et al., 2017) as an example of research that makes unsupported claims against the inclusion of natural sign languages. We refute claims that (1) there are harmful effects of sign language and (2) that listening and spoken language are necessary for optimal development of deaf children. While practical challenges remain (and are discussed) for providing a sign language-rich environment, research evidence suggests that such challenges are worth tackling in light of natural sign languages providing a host of benefits for DHH children – especially in the prevention and reduction of language deprivation.
Chapter
Limited choices exist for assessing the signed language development of deaf and hard of hearing children. Over the past 30 years, the American Sign Language Assessment Instrument (ASLAI) has been one of the top choices for norm-referenced assessment of deaf and hard of hearing children who use American Sign Language. Signed language assessments can also be used to evaluate the effects of a phenomenon known as language deprivation, which tends to affect deaf children. They can also measure the effects of impoverished and idiosyncratic nonstandard signs and grammar used by educators of the deaf and professionals who serve the Deaf community. This chapter discusses what was learned while developing the ASLAI and provides guidelines for educators and researchers of the deaf who seek to develop their own signed language assessments.
Article
This autobiographical article, which began as an interview, reports some reflections by Lila Gleitman on the development of her thinking and her research, in concert with a host of esteemed collaborators over the years, on issues of language and mind, focusing on how language is acquired. Gleitman entered the field of linguistics as a student of Zellig Harris, and learned firsthand of Noam Chomsky's early work. She chose the psychological perspective, later helping to found the field of cognitive science; and with her husband and long-term collaborator, Henry Gleitman, for over 50 years fostered a continuing research community aimed at answering questions such as: When language input to the child is restricted, what is left to explain language acquisition? The studies reported here find that argument structure encoded in the syntax is key (syntactic bootstrapping) and that children learn word meaning in epiphanies (propose but verify).
Article
Previous studies suggest that age of acquisition affects the outcomes of learning, especially at the morphosyntactic level. Unknown is how syntactic development is affected by increased cognitive maturity and delayed language onset. The current paper studied the early syntactic development of adolescent first language learners by examining word order patterns in American Sign Language (ASL). ASL uses a basic Subject–Verb–Object order, but also employs multiple word order variations. Child learners produce variable word order at the initial stage of acquisition, but later primarily produce canonical word order. We asked whether adolescent first language learners acquire ASL word order in a fashion parallel to child learners. We analyzed word order preference in spontaneous language samples from four adolescent L1 learners collected longitudinally from 12 months to six years of ASL exposure. Our results suggest that adolescent L1 learners go through stages similar to child native learners, although this process also appears to be prolonged.
Article
Research interest in heritage speakers and their patterns of bilingual development has grown substantially over the last decade, prompting sign language researchers to consider how the concepts of heritage language and heritage speakers apply in the Deaf community. This overview builds on previous proposals that ASL and other natural sign languages qualify as heritage languages for many individuals raised in Deaf, signing families. Specifically, we submit that Codas and Deaf cochlear implant users from Deaf families (DDCI) are heritage signers, parallel to heritage speakers in spoken language communities. We support this proposal by pointing out developmental patterns that are similar across children who are bilingual in a minority home language and a dominant majority language, regardless of modality. This overview also addresses the complex challenge of determining whether unique patterns displayed by heritage speakers/signers in their home language reflect incomplete acquisition, acquisition followed by attrition, or divergent acquisition. The themes summarized in this article serve as an introduction to subsequent papers in this special issue on heritage signers.
Article
In recent years, normed signed language assessments have become a useful tool for researchers, practitioners, and advocates. Nevertheless, there are limitations in their application, particularly for the diagnosis of language disorders and learning disabilities. Here, we discuss some of the available normed signed language assessments and some of their limitations. We have also provided information related to practices that should lead to improvement in the quality of signed language assessments.
Article
The extent to which development of the brain language system is modulated by the temporal onset of linguistic experience relative to post-natal brain maturation is unknown. This crucial question cannot be investigated with the hearing population because spoken language is ubiquitous in the environment of newborns. Deafness blocks infants' language experience in a spoken form, and in a signed form when it is absent from the environment. Using anatomically constrained magnetoencephalography, aMEG, we neuroimaged lexico-semantic processing in a deaf adult whose linguistic experience began in young adulthood. Despite using language for 30 years after initially learning it, this individual exhibited limited neural response in the perisylvian language areas to signed words during the 300-400 ms temporal window, suggesting that the brain language system requires linguistic experience during brain growth to achieve functionality. The present case study primarily exhibited neural activations in response to signed words in dorsolateral superior parietal and occipital areas bilaterally, replicating the neural patterns exhibited by two previously reported case studies who matured without language until early adolescence (Ferjan Ramirez N, Leonard MK, Torres C, Hatrak M, Halgren E, Mayberry RI. 2014). The dorsal pathway appears to assume the task of processing words when the brain matures without experiencing the form-meaning network of a language.
Article
The hypothesis that children surpass adults in long-term second-language proficiency is accepted as evidence for a critical period for language. However, the scope and nature of a critical period for language has been the subject of considerable debate. The controversy centers on whether the age-related decline in ultimate second-language proficiency is evidence for a critical period or something else. Here we argue that age-onset effects for first vs. second language outcome are largely different. We show this by examining psycholinguistic studies of ultimate attainment in L2 vs. L1 learners, longitudinal studies of adolescent L1 acquisition, and neurolinguistic studies of late L2 and L1 learners. This research indicates that L1 acquisition arises from post-natal brain development interacting with environmental linguistic experience. By contrast, L2 learning after early childhood is scaffolded by prior childhood L1 acquisition, both linguistically and neurally, making it a less clear test of the critical period for language.
Article
The article discusses the importance of sociohistorical context as the foundation of variation studies in sociolinguistics. Studies of variation in spoken and signed languages are reviewed, with discussion of geographical and social aspects, which are treated as external factors in the formation and maintenance of dialects and which often have historical roots. The Black ASL project is reviewed as a case in which racial segregation and educational policies were among the sociohistorical factors in the emergence of Black ASL.
Article
Iconic mappings between words and their meanings are far more prevalent than once estimated and seem to support children’s acquisition of new words, spoken or signed. We asked whether iconicity’s prevalence in sign language overshadows two other factors known to support the acquisition of spoken vocabulary: neighborhood density (the number of lexical items phonologically similar to the target) and lexical frequency. Using mixed-effects logistic regressions, we reanalyzed 58 parental reports of native-signing deaf children’s productive acquisition of 332 signs in American Sign Language (ASL; Anderson & Reilly, 2002) and found that iconicity, neighborhood density, and lexical frequency independently facilitated vocabulary acquisition. Despite differences in iconicity and phonological structure between signed and spoken language, signing children, like children learning a spoken language, track statistical information about lexical items and their phonological properties and leverage this information to expand their vocabulary.
Chapter
The study of sign language acquisition has revealed important insights regarding the acquisition of language in the visual modality, the impact of delayed first-language exposure on language ability, and the relationship between language and cognitive processes. Unique challenges arise in studying sign language acquisition due to the low incidence and heterogeneity of the population and the need for inclusion in all aspects of the research of highly skilled native and near-native language users who are deaf. Despite these challenges, a range of methodological approaches have been applied to sign language acquisition, including longitudinal and cross-sectional sampling of the population, case studies, adaptation of assessment instruments, standardized measures, analyses of naturalistic language, and elicited language samples. Through these methods, researchers are able to conduct rigorous studies whose findings have made invaluable contributions to theories of language acquisition and development in a number of sign languages and populations.
Article
Sign language anaphora is often realized very differently from its spoken language counterpart. In simple cases, an antecedent is associated with a position or “locus” in signing space, and an anaphoric link is obtained by pointing toward that locus to recover its semantic value. This mechanism may sometimes be an overt realization of coindexation in formal syntax and semantics. I discuss two kinds of insights that sign language research can bring to the foundations of anaphora. First, in some cases the overt nature of indices in sign language allows one to bring overt evidence to bear on classic debates in semantics. I consider two: the availability of situation-denoting variables in natural language and the availability of binding without c-command. Second, in some cases sign language pronouns raise new challenges for formal semantics. Loci may function simultaneously as formal variables and as simplified depictions of what they denote, requiring the construction of a formal semantics with iconicity to analyze their properties.
Article
This review addresses several situations of language learning to make concrete the issue of fairness—and justice—that arises in designing assessments. First, I discuss the implications of dialect variation in American English, asking how assessment has taken dialect into consideration. Second, I address the question of how to assess the distributed knowledge of bilingual or dual-language learners. Evaluating the language skills of children growing up in poverty raises the question of whether the current focus on the quantity of caregiver input is misplaced. Third, I address a special case in which the young speakers of a minority language, Romani, are judged to be unfit for schooling because they fail tests in the state language. Finally, I examine the difficult issue of language assessments in countries with multiple official languages and few resources. In each of these areas, the involvement and expertise of linguists are essential for knowing how the grammar works and what might be important to assess.
Article
Students acquiring American Sign Language (ASL) as a second language (L2) struggle with fingerspelling comprehension more than skilled signers. These L2 learners might be attempting to perceive and comprehend fingerspelling in a way that is different from native signers, which could negatively impact their ability to comprehend fingerspelling. This could be related to improper weighting of cues that skilled signers use to identify fingerspelled utterances. Improper cue-weighting in spoken language learners has been ameliorated through explicit phonetic instruction, but this method of teaching has yet to be applied to learners of a language in a new modality (M2 learners). The present study assesses this prospect. Eighteen university students in their third-semester of ASL were divided into two groups; one received explicit phonetic training, and the other received implicit training on fingerspelling. Data from a fingerspelling comprehension test, with two experimental conditions and a control, were submitted to a mixed effects logistic regression. This revealed a significant improvement from the pre-test to post-test by students who received the explicit training. Results indicate that even short exposure to explicit phonetic instruction significantly improves participants’ ability to understand fingerspelling, suggesting that ASL curricula should include this type of instruction to improve students’ fingerspelling comprehension abilities.
Article
A sensitive period for first language acquisition has been proposed and previously supported primarily by case studies of social isolates and studies with Deaf adults who were exposed to American Sign Language (ASL) during mid- to late-childhood. Although informative, case studies with hearing social isolates are confounded by the physical abuse experienced by the children, and studies with Deaf adults do not show the development of language acquisition under the condition of delayed input. There is now new evidence for sensitive period effects on first language acquisition from two unrelated children, MEI and CAL, who were not exposed to a first language until approximately 6 years of age. There is no history of physical abuse—just a misdiagnosis of mental retardation instead of deafness. Once exposed to language, MEI and CAL were immersed in ASL. The results of filming MEI and CAL for 3 1/2 years, from the beginning of their language acquisition process, suggest that sensitive period effects are seen with at least one specific aspect of language: the formal syntactic features (Chomsky 1995). Formal syntactic features are found in different domains of language, including verb agreement, word-order changing mechanisms, and null referents. Analyses of MEI's and CAL's naturalistic language production data, along with preliminary experimental results, reveal difficulties with precisely these domains. MEI and CAL have a higher overall percentage of errors per sample than the two native-signing Deaf comparison children. They made most of their errors with agreeing verbs, the only verb class that marks syntactic features in ASL. They attempted fewer utterances with word order variations, suggesting a difficulty with the formal features that trigger some of the word-order change mechanisms.
Finally, MEI and CAL produced utterances with incorrect null referents more often than the native signers, again implicating a difficulty with the formal features needed to trigger the syntactic licensing of null elements. The results from the present study, combined with those from studies of Deaf adult late-learners, suggest that sensitive period effects exist, are specific, and are long-lasting.
Article
Explaining children’s nonadult interpretations of sentences with quantifiers has been the objective of extensive research for more than 50 years. This article reviews four areas of research, each of which began with the observation that children and adults respond differently to sentences with quantifiers. The observed differences have been subject to considerable debate, often drawing upon linguistic theory for answers and sometimes resulting in changes to the theory. This article begins by discussing children’s comprehension of sentences with pronouns with quantificational versus referential antecedents. The next topic is children’s nonadult responses to sentences with quantifiers and negation. The third topic is children’s analysis of scope phenomena. I conclude with a discussion of children’s understanding of the focus adverb only, which is used to expose some common properties of historically distinct languages. Progress in each of these four areas has revealed children’s deep understanding of the basic meanings of quantifiers and how quantifiers interact with other logical expressions. I conclude that children’s nonadult interpretations of quantifiers are consistent with the theory of Universal Grammar. Expected final online publication date for the Annual Review of Linguistics Volume 3 is January 14, 2017. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
Article
This review introduces and compares syntactic structures in a variety of sign languages. I first examine ways in which sign languages function like spoken languages, and ways in which they differ. I then briefly discuss what sign languages have in common in the syntactic realm; the rest of the article focuses on how they can differ. Because the level of the simple sentence has been documented extensively, this review emphasizes complex sentences, such as sentential complementation, relative clauses, adverbial clauses, embedded questions, and conditionals. Expected final online publication date for the Annual Review of Linguistics Volume 3 is January 14, 2017. Please see http://www.annualreviews.org/catalog/pubdates.aspx for revised estimates.
Article
In this study we followed the characteristics and use of code-mixing by eight KODAs – hearing children of Deaf parents – from the age of 12 to 36 months. The children's interaction was video-recorded twice a year during three different play sessions: with their Deaf parent, with the Deaf parent and a hearing adult, and with the hearing adult alone. Additionally, data were collected on the children's overall language development in both sign language and spoken language. Our results showed that the children preferred to produce code-blends – simultaneous production of semantically congruent signs and words – in a way that was in accordance with the morphosyntactic structure of both languages being acquired. A Deaf parent as the interlocutor increased the number of and affected the type of code-blended utterances. These findings suggest that code-mixing in young bimodal bilingual KODA children can be highly systematic and synchronised in nature and can indicate pragmatic development.
Article
The focus of the paper is a phenomenon well documented in both monolingual and bilingual English acquisition: argument omission. Previous studies have shown that bilinguals acquiring a null and a non-null argument language simultaneously tend to exhibit unidirectional cross-language interaction effects — the non-null argument language remains unaffected, but over-suppliance of overt elements in the null argument language is observed. Here subject and object omission in both ASL (null argument) and English (non-null argument) of young ASL-English bilinguals is examined. Results demonstrate that in spontaneous English production, ASL-English bilinguals omit subjects and objects at a higher rate, for longer, and in unexpected environments when compared with English monolinguals and bilinguals; no effect on ASL is observed. Findings also show that the children differentiate between their two languages — rates of argument omission in English differ during ASL vs. English target sessions. Implications for the general theory of bilingual effects are offered.
Article
There has been a scarcity of studies exploring the influence of students’ American Sign Language (ASL) proficiency on their academic achievement in ASL/English bilingual programs. The aim of this study was to determine the effects of ASL proficiency on reading comprehension skills and academic achievement of 85 deaf or hard-of-hearing signing students. Two subgroups, differing in ASL proficiency, were compared on the Northwest Evaluation Association Measures of Academic Progress and the reading comprehension subtest of the Stanford Achievement Test, 10th edition. Findings suggested that students highly proficient in ASL outperformed their less proficient peers in nationally standardized measures of reading comprehension, English language use, and mathematics. Moreover, a regression model consisting of 5 predictors including variables regarding education, hearing devices, and secondary disabilities as well as ASL proficiency and home language showed that ASL proficiency was the single variable significantly predicting results on all outcome measures. This study calls for a paradigm shift in thinking about deaf education by focusing on characteristics shared among successful deaf signing readers, specifically ASL fluency.
Book
Humans' first languages may have been expressed through sign. Today, sign languages have been found around the world, including communities that do not have access to education or literacy. In addition to serving as a primary medium of communication for deaf communities, they have become among the most popular choices for second language study by hearing students. The status of sign languages as complex and complete languages that are clearly the linguistic "equal" of spoken languages is no longer questioned. Research on the characteristics of visual languages has blossomed since the 1960s, and careful study of deaf children's development of sign language skills is pursued to obtain information to promote deaf children's development. Equally important, the study of how children learn sign language provides excellent theoretical insights into how the human brain acquires and structures sign languages. In the same sense that cross-linguistic research has led to a better understanding of how language affects development, cross-modal research allows us to study the acquisition of language in the absence of a spoken phonology. This book provides cogent summaries of what is known about early gestural development, interactive processes adapted to visual communication, and the processes of semantic, syntactic and pragmatic development in sign. It addresses theoretical as well as applied questions, often with a focus on aspects of language which are (or perhaps are not) related to the modality of the language. © 2006 by Brenda Schick, Marc Marschark, and Patricia Elizabeth Spencer. All rights reserved.
Article
Now, Jack R. Gannon's original groundbreaking volume on Deaf history and culture is available once again. In Deaf Heritage: A Narrative History of Deaf America, Gannon brought together for the first time the story of the Deaf experience in America from a Deaf perspective. Recognizing the need to document the multifaceted history of this unique minority with its distinctive visual culture, he painstakingly gathered as much material as he could on Deaf American life. The result is a 17-chapter montage of artifacts and information that forms an utterly fascinating record from the early nineteenth century to the time of its original publication in 1981. Deaf Heritage tracks the development of the Deaf community both chronologically and by significant subjects. The initial chapter treats the critical topics of early attempts at deaf education, the impact of Deaf and Black deaf teachers, the establishment of schools for the deaf, and the founding of Gallaudet College. Individual chapters cover the 1880s through the 1970s, mixing milestones such as the birth of the National Association of the Deaf and the work of important figures, Deaf and hearing, with anecdotes about day-to-day deaf life. Other chapters single out important facets of Deaf culture: American Sign Language, Deaf Sports, Deaf artists, Deaf humor, and Deaf publications. The overall effect of this remarkable record, replete with archival photographs, tables, and lists of Deaf people's accomplishments, reveals the growth of a vibrant legacy singular in American history.