Article

"SLA2": Or how to connect second language acquisition and sign language acquisition

Article
Full-text available
Sign language interpreting (SLI) is a cognitively challenging task performed mostly by second language learners (i.e., not raised using a sign language as a home language). SLI students must first gain language fluency in a new visuospatial modality and then move between spoken and signed modalities as they interpret. As a result, many students plateau before reaching working fluency, and SLI training program drop-out rates are high. However, we know little about the requisite skills to become a successful interpreter: the few existing studies investigating SLI aptitude in terms of linguistic and cognitive skills lack baseline measures. Here we report a 3-year exploratory longitudinal skills assessment study with British Sign Language (BSL)-English SLI students at two universities (n = 33). Our aims were two-fold: first, to better understand the prerequisite skills that lead to successful SLI outcomes; second, to better understand how signing and interpreting skills impact other aspects of cognition. A battery of tasks was completed at four time points to assess skills, including but not limited to: multimodal and unimodal working memory, 2-dimensional and 3-dimensional mental rotation (MR), and English comprehension. Dependent measures were BSL and SLI course grades, BSL reproduction tests, and consecutive SLI tasks. Results reveal that initial BSL proficiency and 2D-MR were associated with selection for the degree program, while visuospatial working memory was linked to continuing with the program. 3D-MR improved throughout the degree, alongside some limited gains in auditory, visuospatial, and multimodal working memory tasks. Visuospatial working memory and MR were the skills most closely associated with BSL and SLI outcomes, particularly those tasks involving sign language production, thus highlighting the importance of cognition related to the visuospatial modality. These preliminary data will inform SLI training programs, from applicant selection to curriculum design.
Article
Full-text available
In second language research, the concept of cross-linguistic influence or transfer has frequently been used to describe the interaction between the first language (L1) and second language (L2) in the L2 acquisition process. However, less is known about the L2 acquisition of a sign language in general and specifically the differences in the acquisition process of L2M2 learners (learners learning a sign language for the first time) and L2M1 learners (signers learning another sign language) from a multimodal perspective. Our study explores the influence of modality knowledge on learning Swedish Sign Language through a descriptive analysis of the sign lexicon in narratives produced by L2M1 and L2M2 learners, respectively. A descriptive mixed-methods framework was used to analyze narratives of adult L2M1 (n = 9) and L2M2 learners (n = 15), with a focus on sign lexicon, i.e., use and distribution of sign types such as lexical signs, depicting signs (classifier predicates), fingerspelling, pointing, and gestures. The number and distribution of the signs are later compared between the groups. In addition, a comparison with a control group consisting of L1 signers (n = 9) is provided. The results suggest that L2M2 learners exhibit cross-modal cross-linguistic transfer from Swedish (through higher usage of lexical signs and fingerspelling). L2M1 learners exhibit same-modal cross-linguistic transfer from L1 sign languages (through higher usage of depicting signs and use of signs from L1 sign languages and international signs). The study suggests that it is harder for L2M2 learners to acquire the modality-specific lexicon, despite possible underlying gestural knowledge. Furthermore, the study suggests that L2M1 learners' access to modality-specific knowledge, overlapping access to gestural knowledge and iconicity, facilitates faster L2 lexical acquisition, which is discussed from the perspective of linguistic relativity (including modality) and its role in sign L2 acquisition.
Article
Full-text available
A key challenge when learning language in naturalistic circumstances is to extract linguistic information from a continuous stream of speech. This study investigates the predictors of such implicit learning among adults exposed to a new language in a new modality (a sign language). Sign-naïve participants (N = 93; British English speakers) were shown a 4-min weather forecast in Swedish Sign Language. Subsequently, we tested their ability to recognise 22 target sign forms that had been viewed in the forecast, amongst 44 distractor signs that had not been viewed. The target items differed in their occurrence frequency in the forecast and in their degree of iconicity. The results revealed that both frequency and iconicity facilitated recognition of target signs cumulatively. The adult mechanism for language learning thus operates similarly on sign and spoken languages as regards frequency, but also exploits modality-salient properties, for example iconicity for sign languages. Individual differences in cognitive skills and language learning background did not predict recognition. The properties of the input thus influenced adults’ language learning abilities at first exposure more than individual differences.
Article
Full-text available
In recent years there has been a growing interest in sign second language acquisition (SSLA). However, research in this area is sparse. As signed and spoken languages are expressed in different modalities, there is a great potential for broadening our understanding of the mechanisms and the acquisition processes of learning a (second) language through SSLA research. In addition, the application of existing SLA knowledge to sign languages can bring new insights into the generalizability of SLA theories and descriptions, to see whether they hold true for sign languages. In this paper I give a brief overview of sign language and SSLA research, together with insights from the research on iconicity and gestures and its role for SSLA, including examples from my own studies on L2 signers. The paper concludes with a discussion of both the potential and challenges of combining sign language and SLA research, providing some notes towards directions for future research.
Article
Full-text available
This article deals with L2 acquisition of a sign language, examining in particular the use and acquisition of non-manual mouth actions performed by L2 learners of Swedish Sign Language. Based on longitudinal data from an L2 learner corpus, we describe the distribution, frequency, and spreading patterns of mouth actions in sixteen L2 learners at two time points. The data are compared with nine signers of an L1 control group. The results reveal some differences in the use of mouth actions between the groups. The results are specifically related to the category of mouthing borrowed from spoken Swedish. L2 signers show an increased use of mouthing compared to L1 signers. Conversely, L1 signers exhibit an increased use of reduced mouthing compared with L2 signers. We also observe an increase of adverbial mouth gestures within the L2 group. The results are discussed in relation to previous findings, and within the framework of cross-linguistic influence.
Article
Full-text available
This review addresses the question: How are signed languages learned by adult hearing learners? While there has been much research on second language learners of spoken languages, there has been far less work in signed languages. Comparing sign and spoken second language acquisition allows us to investigate whether learning patterns are general (across the visual and oral modalities) or specific (in only one of the modalities), and hence furthers our understanding of second‐language acquisition (SLA). The paper integrates current sign language learning research into the wider field of SLA by focussing on two areas: (1) Does ‘transfer’ occur between the spoken first language and signed second language and (2) What kind of learning patterns are the same across language modalities versus unique to each modality?
Article
Full-text available
Irish Sign Language uses a one-handed alphabet in which each fingerspelled letter has a unique combination of handshape, orientation, and, in a few cases, path movement. Each letter is used to represent a letter from the Latin alphabet (Battison, 1978; Wilcox, 1992). For ISL learners, fingerspelling is a strategy that is used to bridge lexical gaps, and so functions as an interlanguage mechanism, which we hypothesise is more prevalent for new learners (A-level learners in the Common European Framework of Reference for Languages (CEFR); Council of Europe, 2001). Across 2018-19 we marked up a subset of data from the Second Language Acquisition Corpus (ISL-SLAC) for use of fingerspelling. Here, we document how these learners use fingerspelling, and explore the phonology of the fingerspelled items presented by M2L2 learners (handshape, location, movement and orientation), comparing to the production of native signers, drawn from the Signs of Ireland corpus. Results indicate that ISL learners make greater use of fingerspelling in the initial phases of acquiring the language, and that, over time, as they develop a robust lexical repertoire, they reduce the frequency of fingerspelling. Fingerspelling also provides a strategic interlanguage that can be reverted to when vocabulary is unknown.
Thesis
Full-text available
Doctoral thesis, defended June 19, 2020, publicly available at: https://hdl.handle.net/11245.1/c89874a8-97c0-47a3-8bf3-5f138d350c48
Article
Full-text available
Little is known about how hearing adults learn sign languages. Our objective in this study was to investigate how learners of British Sign Language (BSL) produce narratives, and we focused in particular on viewpoint-taking. Twenty-three intermediate-level learners of BSL and 10 deaf native/early signers produced a narrative in BSL using the wordless picture book Frog, where are you? (Mayer, 1969). We selected specific episodes from part of the book that provided rich opportunities for shifting between different characters and taking on different viewpoints. We coded for details of story content, the frequency with which different viewpoints were used and how long those viewpoints were used for, and the numbers of articulators that were used simultaneously. We found that even though learners’ and deaf signers’ narratives did not differ in overall duration, learners’ narratives had less content. Learners used character viewpoint less frequently than deaf signers. Although learners spent just as long as deaf signers in character viewpoint, they spent longer than deaf signers in observer viewpoint. Together, these findings suggest that character viewpoint was harder than observer viewpoint for learners. Furthermore, learners were less skilled than deaf signers in using multiple articulators simultaneously. We conclude that challenges for learners of sign include taking character viewpoint when narrating a story and encoding information across multiple articulators simultaneously.
Chapter
Full-text available
Research interest in sign language L2 acquisition is growing, fueled by dramatic increases in sign language learning (Welles, 2004). Researchers ask to what extent typical L2 patterns apply to hearing students learning an L2 in a new modality, or M2 (second modality)-L2 learners. M2 acquisition may pose unique challenges not observed in typical (unimodal) L2 acquisition. At the same time, co-speech gestures and emblems could potentially be exploited to facilitate M2-L2 acquisition of sign language. Additionally, acquisition of a second signed language by individuals with a signed L1, or M1 (first modality)-L2 learners, provides a further opportunity to test "typical" patterns of L2 acquisition that have been established almost exclusively on the basis of hearing spoken second-language acquisition. This chapter summarizes the small but growing literature on L2 sign acquisition for both M1 and M2 learners, exploring some of the intriguing research questions offered by L2 sign research.
Article
Full-text available
The sign languages of deaf communities and the gestures produced by hearing people are communicative systems that exploit the manual-visual modality as means of expression. Despite their striking differences they share the property of iconicity, understood as the direct relationship between a symbol and its referent. Here we investigate whether non-signing hearing adults exploit their implicit knowledge of gestures to bootstrap accurate understanding of the meaning of iconic signs they have never seen before. In Study 1 we show that for some concepts gestures exhibit systematic forms across participants, and share different degrees of form overlap with the signs for the same concepts (full, partial, and no overlap). In Study 2 we found that signs with stronger resemblance to the corresponding gestures are more accurately guessed and are assigned higher iconicity ratings by non-signers than signs with low overlap. In addition, when more people produced a systematic gesture resembling a sign, they assigned higher iconicity ratings to that sign. Furthermore, participants had a bias to assume that signs represent actions and not objects. The similarities between some signs and gestures could be explained by deaf signers and hearing gesturers sharing a conceptual substrate that is rooted in our embodied experiences with the world. The finding that gestural knowledge can ease the interpretation of the meaning of novel signs and predicts iconicity ratings is in line with embodied accounts of cognition and the influence of prior knowledge to acquire new schemas. Through these mechanisms we propose that iconic gestures that overlap in form with signs may serve as some type of 'manual cognates' that help non-signing adults to break into a new language at first exposure.
Article
Full-text available
Iconicity is often defined as the resemblance between a form and a given meaning, while transparency is defined as the ability to infer a given meaning based on the form. This study examined the influence of knowledge of American Sign Language (ASL) on the perceived iconicity of signs and the relationship between iconicity, transparency (correctly guessed signs), 'perceived transparency' (transparency ratings of the guesses), and 'semantic potential' (the diversity (H index) of guesses). Experiment 1 compared iconicity ratings by deaf ASL signers and hearing nonsigners for 991 signs from the ASL-LEX database. Signers' and nonsigners' ratings were highly correlated; however, the groups provided different iconicity ratings for subclasses of signs: nouns vs. verbs, handling vs. entity, and one- vs. two-handed signs. In Experiment 2, nonsigners guessed the meaning of 430 signs and rated them for how transparent their guessed meaning would be for others. Only 10% of guesses were correct. Iconicity ratings correlated with transparency (correct guesses), perceived transparency ratings, and semantic potential (H index). Further, some iconic signs were perceived as non-transparent and vice versa. The study demonstrates that linguistic knowledge mediates perceived iconicity distinctly from gesture and highlights critical distinctions between iconicity, transparency (perceived and objective), and semantic potential.
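The 'semantic potential' measure above (the H index of the guesses elicited for a sign) can be read as Shannon diversity over the distribution of distinct guesses. A minimal sketch, assuming natural-log Shannon entropy and case-insensitive matching of guess strings (the function name and normalization are illustrative, not from the study):

```python
from collections import Counter
from math import log

def semantic_potential(guesses):
    """Shannon diversity (H index) of the meanings guessed for one sign:
    H = -sum(p_i * ln(p_i)), where p_i is the proportion of participants
    producing each distinct guess. H = 0 when all participants agree;
    H grows as the guesses become more varied."""
    counts = Counter(g.strip().lower() for g in guesses)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())
```

For example, five identical guesses yield H = 0, while four distinct guesses from four participants yield the maximum for that sample, ln 4.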
Article
Full-text available
This article focuses on the similar approaches to, yet different contexts of legal recognition of sign languages in Sweden and Norway. We use examples from sign language documentation (both scientific and popular), legislation that mentions sign language, organization of implementation of sign language acquisition, and public discourse (as expressed by deaf associations' periodicals from the 1970s until today), to discuss the status and ideologies of sign language, and how these have affected deaf education. The legal documents indicate that Norway has a stronger and more wide-reaching legislation, especially sign language acquisition rights, but the formal legal recognition of a sign language is not necessarily reflected in how people discuss the status of the sign language. Our analysis reveals that the countries' sign languages have been subject to language shaming, defined as the enactment of linguistic subordination. The language shaming has not only been enacted by external actors, but has also come from within deaf communities. Our material indicates that language shaming has been more evident in the Norwegian Deaf community, while the Swedish Deaf community has been more active in using a "story of legislation" in the imagination and rhetoric about the Swedish deaf community and bilingual education. The similarities in legislation but differences in deaf education, popular discourse, and representation of the sign languages reveal that the level and scope of legal recognition of sign language in a country only partially reflect the acceptance and status of sign language in general.
Article
Full-text available
Does language dominance modulate knowledge of case marking in Hindi-speaking bilinguals? Hindi is a split ergative language with a rich morphological case system. Subjects of transitive perfective predicates are marked with ergative case (-ne). Human specific direct objects, indirect objects, and dative subjects are marked with the particle -ko. We compared knowledge of case marking in Hindi–English bilinguals with different dominance patterns: 23 balanced bilinguals and two groups of bilinguals with Hindi as their weaker language, 24 L2 learners of Hindi with age of acquisition (AoA) of Hindi in adulthood and 26 Hindi heritage speakers with AoA of Hindi since birth, in oral production and acceptability judgments. The balanced bilinguals outperformed the two English-dominant groups, the L2 learners and the heritage speakers, who showed a similarly lower command of the Hindi case marking system, with the exception of -ko marking as a function of specificity with direct objects. We consider how dominant language transfer, AoA of Hindi, and input factors may explain the acquisition and knowledge of morphology in Hindi as the weaker language.
Article
Full-text available
Bilingual children experience a rapid shift in language preference and input dominance from L1 to L2 upon entering kindergarten when regular contact with L2 starts. Though this change in dominance affects further L1 development, little is known about how various factors shape this. The present study examines the combined influence of different background factors including not only chronological age, age of onset of L2 (L2 AoO), and gender, but also various L1 input measures on L1 receptive and expressive lexical and morphological (case and verb inflections) development in Russian-German bilingual children. For lexical skills, we found a general strong impact of chronological age, gender, and input factors but a differential impact of L2 AoO. Only expressive lexical skills were influenced by language dominance. Morphological development was influenced in the following way: chronological age and gender were most relevant for the acquisition of verb inflection, whereas age, L1 use in the nuclear family and L2 AoO affected the acquisition of case on nouns. This pattern explains the findings of the second series of analyses of longitudinal data, which showed that case is more vulnerable than verb inflection to language attrition—or, taking another perspective—to heritage Russian grammar restructuring.
Conference Paper
Full-text available
This paper aims to present part of the project "From Speech to Sign-learning Swedish Sign Language as a second language", which includes a learner corpus that is based on data produced by hearing adult L2 signers. The paper describes the design of corpus building and the collection of data for the Corpus in Swedish Sign Language as a Second Language (SSLC-L2). Another component of ongoing work is the creation of a specialized annotation scheme for SSLC-L2, one that differs somewhat from the annotation work in the Swedish Sign Language Corpus (SSLC), where the data is based on performance by L1 signers. We also account for and discuss the methodology used to annotate L2 structures.
Article
Full-text available
This study examines the role of language dominance (LD) on linguistic competence outcomes in two types of early bilinguals: (i) child L2 learners of Catalan (L1 Spanish-L2 Catalan) and (ii) child L2 learners of Spanish (L1 Catalan-L2 Spanish). Most child L2 studies typically focus on the development of the languages during childhood and either focus on L1 development or L2 development. Typically, these child L2 learners are immersed in the second language. We capitalize on the unique situation in Catalonia, testing the Spanish and Catalan of both sets of bilinguals, where dominance in either Spanish or Catalan is possible. We examine the co-occurrence of Sentential Negation (SN) with a Negative Concord Item (NCI) in pre-verbal position (Catalan only) and Differential Object Marking (DOM) (Spanish only). The results show that remaining dominant in the L1 contributes to the maintenance of target-like behavior in the language.
Article
Full-text available
Previous research on reference tracking has revealed a tendency towards over-explicitness in second language (L2) learners. Only limited evidence exists that this trend extends to situations where the learner's first and second languages do not share a sensory-motor modality. Using a story-telling paradigm, this study examined how hearing novice L2 learners accomplish reference tracking in American Sign Language (ASL), and whether they transfer strategies from gesture. Our results revealed limited evidence of over-explicitness. Instead, there was an overall similarity in the L2 learners' reference tracking to that of a native signer control group, even in the use of lexical nominals, pronouns and zero anaphora – areas where research on spoken L2 reference tracking predicts differences. Our data also revealed, however, that L2 learners have problems with the referential value of ASL classifiers, and with target-like use of zero anaphora from different verb types, as well as spatial modification. This suggests that over-explicitness occurs in the early stages of different-modality L2 acquisition to a limited extent. We found no evidence of gestural transfer. Finally, we found that L2 learners reintroduce more than native signers, which could indicate that they, unlike native signers, are not yet capable of utilizing the affordances of the visual modality to reference multiple entities simultaneously.
Chapter
Full-text available
The practice of interpretation of sign languages dates back many, many years, though the practice is just now struggling to achieve the status of a profession — shifting from a more-or-less clinical focus to a more-or-less linguistic one. Research on sign languages, which is itself very recent, has convincingly demonstrated that at least some sign languages are indeed languages in the linguistic sense, thereby forcing us to expand our conceptions of the nature of language and to re-examine our approaches to the study of language. Experiments on the simultaneous interpretation of sign languages are contributing to our knowledge and understanding of language and communication in general as well as to the resolution of problems dealing specifically with sign language interpretation. These are the major points that we have gained from the presentations by Domingue and Ingram, Tweney, and Murphy. The relevance of their discussions of sign language interpretation to the general subject areas of language, interpretation, and communication is largely self-evident. Essentially, we are all saying that the interpretation of sign languages is an integral part of the general study of interpretation and that no description (practical or theoretical) of interpretation which fails to take account of sign language interpretation can be regarded as complete. I have set myself the task of demonstrating this point beyond any doubt. The papers by Domingue and Ingram, Tweney, and Murphy have called attention to a number of problems in interpretation of sign languages. My approach will be to explore some of these problems further in relation to language, interpretation and communication in general.
Article
Full-text available
DOWNLOAD: http://is.gd/arbicosys The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.
Chapter
Full-text available
Traditionally, studies of second language (L2) acquisition have focused on the acquisition of spoken L2 by hearing learners, or the acquisition of spoken or written languages by deaf learners. L2 acquisition of sign languages has only recently become a topic of research, largely in response to a recent, dramatic increase in students, both hearing and deaf, interested in learning sign languages (Welles, 2004). As a function of this increase, the demand for instructional and assessment materials reliant on empirical evidence has grown as well (Ashton, Cagle, Brown Kurz, Newell Peterson, & Zinza, 2014). Researchers ask to what extent typical L2 patterns apply to hearing students who are learning an L2 in a new modality; we will refer to such students as M2 (second modality)-L2 learners. One might predict that language learning in a new modality poses unique challenges that are not observed in typical (unimodal) L2 acquisition. At the same time, we now know that hearing non-signers make extensive use of gestures and emblems alongside their spoken language, so we might ask what role this gestural experience plays in M2-L2 acquisition of sign language, and whether it can be exploited to facilitate acquisition. Researchers are also interested in understanding how individuals with a signed L1 acquire a second signed language; such M1 (first modality)-L2 learners provide another opportunity to test the "typical" patterns of L2 acquisition that have been established almost exclusively on the basis of spoken second language acquisition by hearing learners. This chapter summarizes the small but growing literature on L2 sign acquisition for both M1 and M2 learners, and explores some of the many intriguing research questions offered by L2 sign research.
Article
Full-text available
This study determined whether the long-range outcome of first-language acquisition, when the learning begins after early childhood, is similar to that of second-language acquisition. Subjects were 36 deaf adults who had contrasting histories of spoken and sign language acquisition. Twenty-seven subjects were born deaf and began to acquire American Sign Language (ASL) as a first language at ages ranging from infancy to late childhood. Nine other subjects were born with normal hearing, which they lost in late childhood; they subsequently acquired ASL as a second language (because they had acquired spoken English as a first language in early childhood). ASL sentence processing was measured by recall of long and complex sentences and short-term memory for signed digits. Subjects who acquired ASL as a second language after childhood outperformed those who acquired it as a first language at exactly the same age. In addition, the performance of the subjects who acquired ASL as a first language declined in association with increasing age of acquisition. Effects were most apparent for sentence processing skills related to lexical identification, grammatical acceptability, and memory for sentence meaning. No effects were found for skills related to fine-motor production and pattern segmentation.
Article
Full-text available
A study was conducted to examine production variability in American Sign Language (ASL) in order to gain insight into the development of motor control in a language produced in another modality. Production variability was characterized through the spatiotemporal index (STI), which represents production stability in whole utterances and is a function of variability in effector displacement waveforms (Smith et al., 1995). Motion capture apparatus was used to acquire wrist displacement data across a set of eight target signs embedded in carrier phrases. The STI values of Deaf signers and hearing learners at three different ASL experience levels were compared to determine whether production stability varied as a function of time spent acquiring ASL. We hypothesized that lower production stability as indexed by the STI would be evident for beginning ASL learners, indicating greater production variability, with variability decreasing as ASL language experience increased. As predicted, Deaf signers showed significantly lower STI values than the hearing learners, suggesting that stability of production is indeed characteristic of increased ASL use. The linear trend across experience levels of hearing learners was not statistically significant in all spatial dimensions, indicating that improvement in production stability across relatively short time scales was weak. This novel approach to characterizing production stability in ASL utterances has relevance for the identification of sign production disorders and for assessing L2 acquisition of sign languages.
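The spatiotemporal index described above can be sketched computationally. A minimal sketch, assuming the Smith et al. (1995) recipe as summarized in the abstract: linearly time-normalize each repetition's displacement waveform to a fixed number of samples, z-score its amplitude, and sum the across-repetition standard deviations at each normalized time point. Function names, the sample count, and the interpolation scheme are illustrative assumptions, not details from the study:

```python
from statistics import mean, pstdev

def resample(trial, n_points=50):
    """Linearly interpolate one displacement waveform onto n_points samples
    (time normalization)."""
    m = len(trial)
    out = []
    for i in range(n_points):
        pos = i * (m - 1) / (n_points - 1)
        lo = int(pos)
        hi = min(lo + 1, m - 1)
        frac = pos - lo
        out.append(trial[lo] * (1 - frac) + trial[hi] * frac)
    return out

def spatiotemporal_index(trials, n_points=50):
    """Spatiotemporal index (STI): time-normalize each repetition, z-score
    its amplitude, then sum the standard deviations computed across
    repetitions at each normalized time point. Lower STI indicates more
    stable (less variable) production."""
    normed = []
    for trial in trials:
        r = resample(trial, n_points)
        mu, sd = mean(r), pstdev(r)
        normed.append([(v - mu) / sd for v in r])
    return sum(pstdev([rep[i] for rep in normed]) for i in range(n_points))
```

Under this definition, identical repetitions give an STI of 0, and the index grows as the displacement waveforms diverge across repetitions, matching the abstract's prediction that Deaf signers should show lower STI values than beginning learners.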
Article
Full-text available
Aims and Objectives: Learning to control reference in narratives is a major step in becoming a speaker of a second language, including a signed language. Previous research describes the pragmatic and cognitive mechanisms that are used for reference control, and it is clear that differences are apparent between first and second language speakers. However, some debate exists about the reasons for second language learners' tendency towards over-redundancy in reference forms, especially in the use of pronouns. In this study we tested these proposed reasons for L2 differences. Methodology: Narratives by 11 native signers and 13 adult advanced learners of Catalan Sign Language were analysed for person reference. Data: Analysis focused on forms for introduction, reintroduction and maintenance of characters. Findings: The results indicate both groups used reference forms according to information saliency principles in similar ways. Differences between the groups were in the use of pronominal signs, where the learners adopted an over-redundancy strategy in line with one hypothesis from previous studies on second language acquisition in spoken languages. Significance: The results are discussed in terms of the vulnerable syntax–pragmatics interface in developing bilinguals.
Article
Full-text available
Iconicity, a resemblance between properties of linguistic form (both in spoken and signed languages) and meaning, has traditionally been considered to be a marginal, irrelevant phenomenon for our understanding of language processing, development and evolution. Rather, the arbitrary and symbolic nature of language has long been taken as a design feature of the human linguistic system. In this paper, we propose an alternative framework in which iconicity in face-to-face communication (spoken and signed) is a powerful vehicle for bridging between language and human sensori-motor experience, and, as such, iconicity provides a key to understanding language evolution, development and processing. In language evolution, iconicity might have played a key role in establishing displacement (the ability of language to refer beyond what is immediately present), which is core to what language does; in ontogenesis, iconicity might play a critical role in supporting referentiality (learning to map linguistic labels to objects, events, etc., in the world), which is core to vocabulary development. Finally, in language processing, iconicity could provide a mechanism to account for how language comes to be embodied (grounded in our sensory and motor systems), which is core to meaningful communication.
Book
Full-text available
Gestures are often regarded as the most typical compensatory device used by language learners in communicative trouble. Yet gestural solutions to communicative problems have rarely been studied within any theory of second language use. The work presented in this volume aims to account for second language learners’ strategic use of speech-associated gestures by combining a process-oriented framework for communication strategies with a cognitive theory of gesture. Two empirical studies are presented. The production study investigates Swedish learners of French and French learners of Swedish and their use of strategic gestures. The results, which are based on analyses of both individual and group behaviour, contradict popular opinion as well as theoretical assumptions from both fields. Gestures are not primarily used to replace speech, nor are they chiefly mimetic. Instead, learners use gestures with speech, and although they do exploit mimetic gestures to solve lexical problems, they also use more abstract gestures to handle discourse-related difficulties and metalinguistic commentary. The influence of factors such as proficiency, task, culture, and strategic competence on gesture use is discussed, and the oral and gestural strategic modes are compared. In the evaluation study, native speakers’ assessments of learners’ gestures, and the potential effect of gestures on evaluations of proficiency are analysed and discussed in terms of individual communicative style. Compensatory gestures function at multiple communicative levels. This has implications for theories of communication strategies, and an expansion of the existing frameworks is discussed taking both cognitive and interactive aspects into account.
Article
Full-text available
In American Sign Language (ASL), native signers use eye gaze to mark agreement (Thompson, Emmorey and Kluender, 2006). Such agreement is unique (it is articulated with the eyes) and complex (it occurs with only two out of three verb types, and marks verbal arguments according to a noun phrase accessibility hierarchy). In a language production experiment using head-mounted eye-tracking, we investigated the extent to which eye gaze agreement can be mastered by late second-language (L2) learners. The data showed that proficient late learners (with an average of 18.8 years signing experience) mastered a cross-linguistically prevalent pattern (NP-accessibility) within the eye gaze agreement system but ignored an idiosyncratic feature (marking agreement on only a subset of verbs). Proficient signers produced a grammar for eye gaze agreement that diverged from that of native signers but was nonetheless consistent with language universals. A second experiment examined the eye gaze patterns of novice signers with less than two years of ASL exposure and of English-speaking non-signers. The results provided further evidence that the pattern of acquisition found for proficient L2 learners is directly related to language learning, and does not stem from more general cognitive processes for eye gaze outside the realm of language.
Article
Full-text available
The acquisition sequences of 11 English functors were compared for native Chinese- and Spanish-speaking children learning English. Three different methods of speech analysis used to obtain the sequences are described in detail. All three methods yielded approximately the same sequence of acquisition for both language groups. This finding provides strong support for the existence of universal child language learning strategies and suggests a program of research that could lead to their description.
Article
In this commentary on the article by Kidd and Garcia, we point out that research on natural signed languages is an important component of the goal of broadening the database of knowledge about how languages are acquired. While signed languages do display some modality effects, they also have many similarities to spoken languages, both in function and in form. Thus, research on signed languages and their acquisition is important for a fuller understanding of the diversity of languages. Since signed languages are often learned in contexts other than those of typical input, it is also important to document the effects of input variation; we also see it as critical that input be provided as early as possible from models as fluent as possible. Finally, we call for removing existing barriers to training and education for would-be researchers, especially those interested in working on signed languages. Importantly, we advocate for the recognition of signed languages, for signed language research, and for the empowerment of community members to lead this research.
Article
The sociolinguistics of sign languages parallels as well as complements the sociolinguistics of spoken languages. All of the key areas of sociolinguistics, such as multilingualism, language contact, variation, and language attitudes, are of immediate relevance to sign languages. At the same time, sign language researchers using a range of data sources and methods (e.g., sign language corpora, linguistic elicitation, and linguistic ethnography) have shown that the unique natures and features of sign languages allow us to look at all these areas from a different vantage point. First, deficit perspectives on deafness serve to sharply distinguish the reality of sign languages from that of spoken languages. The linguistic status of sign languages has long been contested, and certain forms of signing are still labeled “nonlanguage.” The delineation and differentiation of sign languages, and of sign languages from other signing practices (e.g., gesturing, home sign), has therefore been a key issue. Second, sign languages are used by both deaf and hearing people, in contexts where spoken/written languages, and increasingly also other sign languages, are in use, leading to complex multimodal forms of sign–spoken, sign–written, and sign–sign language contact, and to hierarchical constellations of language attitudes and ideologies in relation to signed and spoken languages and variants. © 2021 The Authors. Journal of Sociolinguistics published by John Wiley & Sons Ltd.
Article
In Ontario, Canada, the movement toward inclusive education has led to a resource consultant model in early childhood education and care (ECEC). The purpose of this narrative case study was to analyze participant experiences involving a daycare setting previously attended by a young deaf child who benefits from American Sign Language (ASL). Semi-structured interviews were conducted with a community ASL instructor who was hired to provide services in a daycare, a parent of a young deaf child who attended this setting, and a child care resource consultant who worked with all participants. These interviews were used to construct a retrospective narrative of participant experiences in an Ontario ECEC setting. As study findings reveal, the current design of publicly funded early intervention sign language service to deaf children may sometimes be at odds with the ethos of an inclusive ECEC system.
Article
Signing systems that attempt to represent spoken language via manual signs – some invented, and some borrowed from natural sign languages – have historically been used in classrooms with deaf children. However, despite decades of research and use of these systems in the classroom, there is little evidence supporting their educational effectiveness. In this paper, the authors argue against the use of signing systems as instructional tools. This argument is based upon research demonstrating that (1) signing systems are less comprehensible to learners who rely upon signs rather than speech, (2) signing systems are used inconsistently by teachers, and (3) signing systems often unintentionally exhibit features of natural signed grammar, leading to input that does not accurately convey spoken languages, which is the original intention of these systems. Instead, the authors advocate for a return to the use of natural signed languages in classrooms educating deaf children, with creative uses of interpretation to provide those students who may prefer or benefit from spoken English with its presence in the classroom. In addition, we note ways in which future research may explore how natural sign languages and deaf adults may benefit the educational experiences of deaf children.
Article
Previous work indicates that 1) adults with native sign language experience produce more manual co-speech gestures than monolingual non-signers, and 2) one year of ASL instruction increases gesture production in adults, but not enough to differentiate them from non-signers. To elucidate these effects, we asked early ASL–English bilinguals, fluent late second language (L2) signers (≥ 10 years of experience signing), and monolingual non-signers to retell a story depicted in cartoon clips to a monolingual partner. Early and L2 signers produced manual gestures at higher rates compared to non-signers, particularly iconic gestures, and used a greater variety of handshapes. These results indicate susceptibility of the co-speech gesture system to modification by extensive sign language experience, regardless of the age of acquisition. L2 signers produced more ASL signs and more handshape varieties than early signers, suggesting less separation between the ASL lexicon and the co-speech gesture system for L2 signers.
Article
This study explores the L2M2 acquisition of Norwegian Sign Language by hearing adults, with a focus on the production and use of depicting signs. A group of students and their instructors were asked to respond to prompt questions about directions and locations in Norwegian Sign Language, and their responses were then compared. An examination of the students’ depicting signs shows that they struggled more with the phonological parameters of orientation and movement, rather than of handshape. In addition, they used fewer depicting signs than their instructors, and instead relied more on lexical signs. Finally, students were found to struggle with the coordination of depicting signs within the signing space and in relation to their own bodies. It is hoped that the findings from this study can be used to inform future research as well as curriculum development and pedagogy.
Article
Victor, a biologically normal child of normal hearing and good intelligence, had almost no exposure to spoken language until he was three years old; his only language until that time was the sign language which he learned from his deaf-mute parents. Three structural features of sign language are described, and evidence is presented that Victor structured his sentences in those ways when his speech development began. It is argued that the presence of structural interference from sign language in Victor's speech suggests that the manner of functioning of our innate capacity to acquire speech may differ depending on the nature of prior linguistic experience.
Article
Sign languages are of great interest to linguists, because while they are the product of the same brain, their physical transmission differs greatly from that of spoken languages. In this 2006 study, Wendy Sandler and Diane Lillo-Martin compare sign languages with spoken languages, in order to seek the universal properties they share. Drawing on general linguistic theory, they describe and analyze sign language structure, showing linguistic universals in the phonology, morphology, and syntax of sign language, while also revealing non-universal aspects of its structure that must be attributed to its physical transmission system. No prior background in sign language linguistics is assumed, and numerous pictures are provided to make descriptions of signs and facial expressions accessible to readers. Engaging and informative, Sign Language and Linguistic Universals will be invaluable to linguists, psychologists, and all those interested in sign languages, linguistic theory and the universal properties of human languages.
Article
The present study implemented a sign-repetition task at two points in time to hearing adult learners of British Sign Language and explored how each phonological parameter, sign complexity, and iconicity affected sign production over an 11-week (22-hour) instructional period. The results show that training improves articulation accuracy and that some sign components are produced more accurately than others: Handshape was the most difficult, followed by movement, then orientation, and finally location. Iconic signs were articulated less accurately than arbitrary signs because the direct sign-referent mappings and perhaps their similarity with iconic co-speech gestures prevented learners from focusing on the exact phonological structure of the sign. This study shows that multiple phonological features pose greater demand on the production of the parameters of signs and that iconicity interferes in the exact articulation of their constituents.
Article
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language (BSL) for 1-3 years, produce and comprehend classifiers in (static) locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal (high) accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became (and, indeed, becomes) conventionalized during the genesis of sign languages.
Article
Kristina Svartholm, Ph.D., is Associate Professor of Scandinavian Languages at Stockholm University, where she teaches Swedish as a second language to deaf students. From the early 1980s she has conducted research in bilingual education for the deaf, and is advisor to Swedish educational authorities and the World Federation of the Deaf. This report, in a different version, will be published in Russian, in Alternative Approaches to Deaf Education, edited by Galina Zaitseva for the Moscow Bilingual School for the Deaf.
Chapter
This chapter reports on a study that investigates the phenomenon of "sign accent," or systematic phonological errors made by nonsigners attempting to mimic isolated ASL signs. The study has implications for sign language teaching, where people are learning an unfamiliar language in a modality new to them. The study finds two factors relevant to how well nonsigners produce the target handshape. One is markedness; anatomical features of the hand affect dexterity in making a sign, although with qualifications. This general finding is no surprise - studies of acquisition repeatedly show the relevance of phonetic markedness. The other factor, however, is surprising: transfer of phonological features from gestures hearing people make (with or without accompanying speech) affects the ability to mimic signs.
Article
A particular view of bilingualism — the monolingual (or fractional) view — has been given far too much importance in the study of bilinguals. According to it, the bilingual is (or should be) two monolinguals in one person. In this paper, the monolingual view is spelled out, and the negative consequences it has had on various areas of bilingual research are discussed. A bilingual (or wholistic) view is then proposed. According to it, the bilingual is not the sum of two complete or incomplete monolinguals; rather, he or she has a unique and specific linguistic configuration. This view is described and four areas, of research are discussed in its light: comparing monolinguals and bilinguals, language learning and language forgetting, the bilingual's speech modes, the bilingual child and ‘semilingualism’.
Article
Sign Language Studies 1.2 (2001) 110-114 Every deaf child, whatever the level of his/her hearing loss, should have the right to grow up bilingual. By knowing and using both a sign language and an oral language (in its written and, when possible, in its spoken modality), the child will attain his/her full cognitive, linguistic, and social capabilities. The deaf child has to accomplish a number of things with language: 1. Communicate with parents and family members as soon as possible. A hearing child normally acquires language in the very first years of life on the condition that he/she is exposed to a language and can perceive it. Language in turn is an important means of establishing and solidifying social and personal ties between the child and his/her parents. What is true of the hearing child must also become true of the deaf child. He/she must be able to communicate with his/her parents by means of a natural language as soon, and as fully, as possible. It is with language that much of the parent-child affective bonding takes place. 2. Develop cognitive abilities in infancy. Through language, the child develops cognitive abilities that are critical to his/her personal development. Among these we find various types of reasoning, abstracting, memorizing, etc. The total absence of language, the adoption of a non-natural language or the use of a language that is poorly perceived or known, can have major negative consequences on the child’s cognitive development. 3. Acquire world knowledge. The child will acquire knowledge about the world mainly through language. As he/she communicates with parents, other family members, children and adults, information about the world will be processed and exchanged. It is this knowledge, in turn, which serves as a basis for the activities that will take place in school. It is also world knowledge which facilitates language comprehension; there is no real language understanding without the support of this knowledge. 4. Communicate fully with the surrounding world. The deaf child, like the hearing child, must be able to communicate fully with those who are part of his/her life (parents, brothers and sisters, peers, teachers, various adults, etc.). Communication must take place at an optimal rate of information in a language that is appropriate to the interlocutor and the situation. In some cases it will be sign language, in other cases it will be the oral language (in one of its modalities), and sometimes it will be the two languages in alternation. 5. Acculturate into two worlds. Through language, the deaf child must progressively become a member of both the hearing and of the Deaf world. He/she must identify, at least in part, with the hearing world which is almost always the world of his/her parents and family members (90% of deaf children have hearing parents). But the child must also come into contact as early as possible with the world of the Deaf, his/her other world. The child must feel comfortable in these two worlds and must be able to identify with each as much as possible. Bilingualism is the knowledge and regular use of two or more languages. A sign language-oral language bilingualism is the only way that the deaf child will meet his/her needs, that is, communicate early with his/her parents, develop his/her cognitive abilities, acquire knowledge of the world, communicate fully with the surrounding world, and acculturate into the world of the hearing and of the Deaf. The bilingualism of the deaf child will involve the sign language used by the Deaf community and the oral language used by the hearing majority. The latter language will be acquired in its written, and if possible, in its spoken modality. Depending on the child, the two languages will play different roles: Some children will be dominant in sign language, others will be dominant in the oral language, and some will be balanced in their two languages.
In addition, various types of bilingualism are possible since...
Article
This report outlines current theories and practices of second language teaching and suggests possible applications of these theories and practices to the teaching of American Sign Language to non-signers. (CFM)
Article
Given that the linguistic articulators for sign language are also used to produce co-speech gesture, we examined whether one year of academic instruction in American Sign Language (ASL) impacts the rate and nature of gestures produced when speaking English. A survey study revealed that 75% of ASL learners (N = 95), but only 14% of Romance language learners (N = 203), felt that they gestured more after one year of language instruction. A longitudinal study confirmed this perception. Twenty-one ASL learners and 20 Romance language learners (French, Italian, Spanish) were filmed re-telling a cartoon story before and after one academic year of language instruction. Only the ASL learners exhibited an increase in gesture rate, an increase in the production of iconic gestures, and an increase in the number of handshape types exploited in co-speech gesture. Five ASL students also produced at least one ASL sign when re-telling the cartoon. We suggest that learning ASL may (i) lower the neural threshold for co-speech gesture production, (ii) pose a unique challenge for language control, and (iii) have the potential to improve cognitive processes that are linked to gesture.
Article
When researchers investigate “transfer” in second language acquisition, they are often referring to the role that first language (L1) structures play in second language (L2) acquisition. Here we expand the discussion and look at transfer from another point of view, focusing on possible transfer of affective and communicative behaviors by both L1 and L2 learners, especially in facial behaviors. In ASL, facial expression functions in two distinct ways: to convey emotion as it does in spoken language, and to mark certain specific grammatical structures (e.g. topics, conditionals, relative clauses). Research on affective development suggests that specific facial expressions for emotion are universal, that children consistently use facial expression to convey emotional states by the end of their first year, and that deaf children begin to acquire the grammatical facial behaviors of ASL at about two years of age. The dual use in ASL of similar facial behaviors, to signal structure and emotion, presents a unique opportunity to explore the boundaries and interactions of two communicative systems: language and affect.
Article
The differences between Pidgin Sign English and American Sign Language in simultaneity, or the visible presence of two or more linguistic units (manual or nonmanual) co-occurring, are demonstrated. Differences are exemplified in handshape-classifier pronouns, directional verbs, co-occurring manual signs, and nonmanual behavior. (PMJ)
Article
This paper contains three parts. In the first part, what it means to be bilingual in sign language and the spoken (majority) language is explained, and similarities as well as differences with hearing bilinguals are discussed. The second part examines the biculturalism of deaf people. Like hearing biculturals, they take part, to varying degrees, in the life of two worlds (the deaf world and the hearing world), they adapt their attitudes, behaviors, and languages to both worlds, and they combine and blend aspects of the two. The decisional process they go through in choosing a cultural identity is discussed and the difficulties met by some groups are examined. The third part begins with a discussion of why early bilingualism is crucial for the development of deaf children. The reasons that bilingualism and biculturalism have not normally had the favor of those involved in nurturing and educating deaf children are then discussed. They are of two kinds: misunderstandings concerning bilingualism and sign language; and the lack of acceptance of certain realities by many professionals in deafness, most notably members of the medical world. The article ends with a discussion of the role of the two languages in the development of deaf children.