Article

Abstract

Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.
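The reported pattern amounts to a 2 x 2 design crossing the overt semantic judgment with a hidden ASL form manipulation. The sketch below (Python; illustrative only, not the authors' materials or analysis code) encodes that design and the direction of the reported effects. BIRD-DUCK and MOVIE-PAPER are pairs cited in the excerpts that follow; the two control pairs are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class WordPair:
    prime: str
    target: str
    semantically_related: bool  # the overt judgment participants make
    asl_form_related: bool      # hidden manipulation: do the ASL translations share form?

def predicted_effect(pair: WordPair) -> str:
    """Direction of the reported RT effect relative to form-unrelated pairs."""
    if not pair.asl_form_related:
        return "baseline"
    # ASL form overlap speeds "related" judgments and slows "unrelated" ones.
    return "facilitation (faster)" if pair.semantically_related else "interference (slower)"

stimuli = [
    WordPair("bird", "duck", semantically_related=True, asl_form_related=True),
    WordPair("movie", "paper", semantically_related=False, asl_form_related=True),
    # The two control cells below are hypothetical placeholders.
    WordPair("chair", "table", semantically_related=True, asl_form_related=False),
    WordPair("apple", "cloud", semantically_related=False, asl_form_related=False),
]

for p in stimuli:
    print(f"{p.prime}-{p.target}: {predicted_effect(p)}")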


... Signed and spoken languages have very little articulatory or perceptual overlap, and signed languages have no standardized, widely-used written systems. Deaf and hearing bilingual adults nevertheless activate signs when reading printed words or listening to spoken words (e.g., written English-ASL: Meade, Midgley, Sehyr Sevcikova, Holcomb, & Emmorey, 2017; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Morford, Kroll, Piñar, & Wilkinson, 2014; Morford, Occhino, Piñar, Wilkinson, & Kroll, 2017; Morford et al., 2019; Quandt & Kubicek, 2018; Mendoza & Jackson Maldonado, 2020; spoken English-ASL: Giezen, Blumenfeld, Shook, Marian, & Emmorey, 2015; Shook & Marian, 2012; spoken Spanish-LSE: Villameriel, Dias, Costello, & Carreiras, 2016). For example, Morford et al. (2011) asked highly proficient deaf ASL-English bilinguals and non-signing hearing controls to decide if English word pairs were semantically related. ...
... The task included an implicit priming manipulation: some of the English word pairs had phonologically related ASL translations. ...
... To address this gap, the present study examined co-activation of ASL and written English in deaf bilingual middle school students, following the paradigm by Morford et al. (2011), adapted from Thierry and Wu (2007). Two hypotheses were investigated. ...
Article
Bilinguals, both hearing and deaf, activate multiple languages simultaneously even in contexts that require only one language. To date, the point in development at which bilingual signers experience cross-language activation of a signed and a spoken language remains unknown. We investigated the processing of written words by ASL-English bilingual deaf middle school students. Deaf bilinguals were faster to respond to English word pairs with phonologically related translations in ASL than to English word pairs with unrelated translations, but no difference was found for hearing controls with no knowledge of ASL. The results indicate that co-activation of signs and written words is not the outcome of years of bilingual experience, but instead characterizes bilingual language development.
... Critically, the non-selective nature of bilingual lexical activation has also been shown in bilinguals with two languages of different modality (oral and sign), termed "bimodal bilinguals". A number of experiments have shown that deaf and hearing bimodal bilinguals activate sign properties when processing words (Kubus, Villwock, Morford & Rathmann, 2015; Morford, Kroll, Piñar & Wilkinson, 2014; Morford, Occhino-Kehoe, Piñar, Wilkinson & Kroll, 2017; Morford, Wilkinson, Villwock, Piñar & Kroll, 2011; Shook & Marian, 2012; Villameriel, Dias, Costello & Carreiras, 2016). For example, Morford et al. (2011) showed that phonological relationships in American Sign Language (ASL) influenced semantic similarity judgements of written word pairs in English (see also Villameriel et al., 2016, for similar results with hearing bimodal bilinguals, and Morford et al., 2014, for a different result with hearing bimodal bilinguals). ...
... Much scarcer is the evidence showing cross-linguistic influences of words on sign processing (Emmorey, Mott, Meade, Holcomb & Midgley, 2020; Giezen & Emmorey, 2016; Hosemann, Mani, Herrmann, Steinbach & Altvater-Mackensen, 2020; Lee, Meade, Midgley, Holcomb & Emmorey, 2019). Using the same paradigm as Morford et al. (2011), Lee et al. (2019) showed that hearing bimodal bilinguals were sensitive to the phonological relationship of the English translations (i.e., whether they rhymed) while judging the semantic relationship of ASL sign pairs. Relevant here, this result was not replicated in the deaf group unless the deaf individuals were aware of the English phonological manipulation. ...
Article
Full-text available
To investigate cross-linguistic interactions in bimodal bilingual production, behavioural and electrophysiological measures (ERPs) were recorded from 24 deaf bimodal bilinguals while naming pictures in Catalan Sign Language (LSC). Two tasks were employed: a picture-word interference task and a picture-picture interference task. Cross-linguistic effects were explored via distractors that were either semantically related to the target picture, related to the phonology/orthography of the Spanish name of the target picture, or unrelated. No semantic effects were observed in sign latencies, but ERPs differed between semantically related and unrelated distractors. For the form-related manipulation, a facilitation effect was observed both behaviourally and at the ERP level. Importantly, these effects were not influenced by the type of distractor (word/picture) presented, providing the first piece of evidence that deaf bimodal bilinguals are sensitive to oral language in sign production. Implications for models of cross-linguistic interactions in bimodal bilinguals are discussed.
... Critically, the non-selective nature of bilingual lexical activation has also been shown in bilinguals with two languages of different modality (oral and signed), termed "bimodal bilinguals". A number of experiments have shown that Deaf and hearing bimodal bilinguals activate sign properties when processing words (Kubus, Villwock, Morford, & Rathmann, 2015; Morford, Kroll, Piñar, & Wilkinson, 2014; Morford, Occhino-Kehoe, Piñar, Wilkinson, & Kroll, 2017; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Shook & Marian, 2012; Villameriel, Dias, Costello, & Carreiras, 2016). For example, Morford et al. (2011) showed that phonological relationships in American Sign Language (ASL) influenced semantic similarity judgements of written word pairs in English (see also Villameriel et al., 2016, for similar results with hearing bimodal bilinguals, and Morford et al., 2014, for a different result with hearing bimodal bilinguals). ...
... In the present study, we explored cross-language, cross-modal interactions in Deaf bimodal bilinguals. These results provide further evidence of cross-linguistic interactions in deaf bimodal bilinguals, both in language comprehension (Lee et al., 2019; Morford et al., 2011) and language production (Emmorey et al., 2020; Giezen & Emmorey, 2016), and from the weaker L2 oral language onto the more dominant L1 sign language, a pattern seen in other studies within the oral modality (Bobb, Von Holzen, Mayor, Mani, & Carreiras, 2020; Von Holzen & Mani, 2014). This indicates that Deaf signers had attained sufficient proficiency in their L2 oral language to experience word influences during sign production (Van Hell & Tanner, 2012). ...
Preprint
The aim of the present study was to explore cross-linguistic interactions in language production when the language to be produced and the non-intended language are from different modalities. Concretely, we tested whether Deaf bimodal bilinguals are sensitive to oral language influences when they sign. To that end, 25 Deaf Catalan Sign Language (LSC)-Spanish bilinguals named pictures in LSC while ignoring either written distractor words in Spanish (picture-word interference) or distractor pictures (picture-picture interference). Cross-linguistic interactions were explored by means of behavioural and electrophysiological measures from three categories of distractors: semantically related, phonologically related in the oral language, or unrelated to the target picture. No semantic effects were observed in sign latencies, but ERPs differed between semantically related and unrelated distractors. Considering the phonological manipulation, phono-translation facilitation was observed both behaviourally and at the ERP level. Importantly, these effects were not determined by the modality in which distractors were presented. The present results reveal that oral language interacts with sign language production, even when there is no explicit presence of the oral language in the task. Implications for models of cross-linguistic interactions in bilingual language production are discussed.
... representations are activated by deaf signers during reading. For example, Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) used a semantic relatedness paradigm in which participants were presented with prime-target word pairs and asked to judge whether or not the pairs were semantically related. When the American Sign Language (ASL) translations were similar in their sign phonology, semantically related word pairs were judged more quickly, while opposite, inhibitory effects on response time were seen when pairs were semantically unrelated. ...
... In light of Yan et al.'s (2015) finding that deaf readers have highly efficient parafoveal processing abilities, and the research showing that deaf readers activate sign phonological representations during reading (e.g., Meade et al., 2017; Morford et al., 2011), Pan et al. (2015) extended the parafoveal preview paradigm to look at sign phonological activation during sentence reading among signing deaf adults from Beijing. Their study used sign phonologically related, but semantically unrelated, word pairs with Chinese Sign Language (CSL) translations that were identical in at least two out of four sign phonological features (handshape, orientation, movement, and location). ...
... Working with a deaf research assistant, we selected 40 pairs of two-character Chinese words which had phonologically similar HKSL translations. Following Morford et al. (2011) and Pan et al. (2015), the words in each pair overlapped in at least two sign phonological parameters but were unrelated in terms of orthographic appearance, pronunciation, and meaning. Ten of these pairs had sign translations that overlapped in all three main phonological parameters: handshape, location, and movement (see example in Fig. 1a). ...
Article
Research has found that deaf readers unconsciously activate sign translations of written words while reading. However, the ways in which different sign phonological parameters associated with these sign translations tie into reading processes have received little attention in the literature. In this study on Chinese reading, we used a parafoveal preview paradigm to investigate how four different types of sign phonologically related preview affect reading processes in adult deaf signers of Hong Kong Sign Language (HKSL). The four types of sign phonologically related preview-target pair were: (1) pairs with HKSL translations that overlapped in three parameters (handshape, location, and movement); (2) pairs that overlapped in only handshape and location; (3) pairs that overlapped in only handshape and movement; and (4) pairs that overlapped in only location and movement. Results showed that the handshape parameter was of particular importance, as only sign translation pairs that had handshape among their overlapping sign phonological parameters led to early sign activation. Furthermore, we found that, compared to control previews, deaf readers took longer to read targets when the sign translation previews overlapped with targets in either handshape and movement or handshape, movement, and location. In contrast, fixation times on targets were shorter when previews and targets overlapped in location and any single additional parameter, either handshape or movement. These results indicate that the phonological parameters of handshape, location, and movement are activated via orthography during Chinese reading and can have different effects on parafoveal processing in deaf signers of HKSL.
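The overlap criterion used in these preview studies can be made concrete with a small helper. This is a hypothetical sketch: the parameter inventory is the one named in the excerpts above, but the parameter values are invented for illustration and are not taken from the HKSL materials.

# Major sign phonological parameters referenced in these studies.
SIGN_PARAMETERS = ("handshape", "location", "movement")

def shared_parameters(sign_a: dict, sign_b: dict) -> set:
    """Return the major parameters on which two sign descriptions agree."""
    return {p for p in SIGN_PARAMETERS if sign_a.get(p) == sign_b.get(p)}

def is_form_related(sign_a: dict, sign_b: dict, threshold: int = 2) -> bool:
    """A pair counts as sign-phonologically related if it shares >= threshold parameters."""
    return len(shared_parameters(sign_a, sign_b)) >= threshold

# Invented parameter values, for illustration only.
sign_a = {"handshape": "flat-B", "location": "chin", "movement": "tap"}
sign_b = {"handshape": "flat-B", "location": "chin", "movement": "circle"}
print(shared_parameters(sign_a, sign_b))  # handshape and location
print(is_form_related(sign_a, sign_b))    # True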
... Previous priming research showed that signing adults implicitly activate signs and their respective phonological forms when they are reading written words [e.g., ASL while reading English words: Morford et al., 2011, 2014; Meade et al., 2017; Quandt and Kubicek, 2018; DGS while reading German words: Kubus et al., 2015; Hong Kong Sign Language (HKSL) while reading Cantonese words: Thierfelder et al., 2020]. These studies exploited pairs of written words that were not related in the written or phonological domain in a given spoken language, but shared sign units in a respective sign language, such as MOVIE and PAPER sharing location and handshape in ASL (here and in the following, capitals denote signed stimulus materials), but no speech sounds in spoken English. ...
... When deaf signing participants had to detect semantic similarities for pairs of written words, implicit phonological priming of the underlying signs sped up responding and, vice versa, decisions about semantic differences for pairs of written words slowed down when the words overlapped in sign phonology (for ASL: Morford et al., 2011, 2014; for DGS: Kubus et al., 2015). In addition, Deaf native signers show phonological similarity effects in ASL when they have to recall lists of English written words (Miller, 2007). ...
... The children were faster to make "yes" decisions (the words are semantically related) when the ASL translations were phonologically related. As in previous studies with deaf adults (Morford et al., 2011), a subset of the presented semantically related and unrelated word pairs shared sign phonology in ASL, and the children were faster to respond to written word pairs with phonological relations in ASL. ...
Article
Full-text available
Signed and written languages are intimately related in proficient signing readers. Here, we tested whether deaf native signing beginning readers are able to make rapid use of ongoing sign language to facilitate recognition of written words. Deaf native signing children (mean age 10 years, 7 months) received prime-target pairs with sign word onsets as primes and written words as targets. In a control group of hearing children (matched in their reading abilities to the deaf children, mean age 8 years, 8 months), spoken word onsets were instead used as primes. Targets (written German words) were completions either of the German signs or of the spoken word onsets. The participants' task was to decide whether the target word was a possible German word. Sign onsets facilitated processing of written targets in deaf children similarly to spoken word onsets facilitating processing of written targets in hearing children. In both groups, priming elicited similar effects in the simultaneously recorded event-related potentials (ERPs), starting as early as 200 ms after the onset of the written target. These results suggest that beginning readers can use ongoing lexical processing in their native language – be it signed or spoken – to facilitate written word recognition. We conclude that intimate interactions between sign and written language might in turn facilitate reading acquisition in deaf beginning readers.
... With this increasing awareness of bimodal bilingualism, the need to understand lexical processing of both languages as they relate to each other is gaining scientific attention. The view that all readers, deaf or hearing, rely primarily on phonological decoding to recognize written words of a spoken language (Hanson, 1989; Mayer & Trezak, 2014; Wang, Trezek, Luckner, & Paul, 2008) has come under greater scrutiny due to investigations that control for the bilingual experience of the deaf participants (Barca, Pezzulo, Castrataro, Rinaldi, & Caselli, 2013) or that investigate how knowledge of a signed language may impact processing of written words in a spoken language (Meade, Midgley, Sevcikova, Holcomb, & Emmorey, 2017; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Quandt & Kubicek, 2018). Although bilingual lexical processing of both words and signs has been studied in hearing bimodal bilinguals (Emmorey, Borinstein, & Thompson, 2005; Emmorey, Petrich, & Gollan, 2012; Giezen, Blumenfeld, Shook, Marian, & Emmorey, 2015), no study to date has evaluated the automaticity of lexical access for words versus signs in deaf bilinguals in a single paradigm. ...
... Although the current study did not include a between-language condition, the results are generally consistent with prior claims that cross-language similarity leads to greater competition in mixed-language contexts. Although there is evidence that all bilinguals, regardless of language similarity, activate both languages during lexical processing (e.g., Morford et al., 2011), there is no explicit language mixing included in the study design. ...
... Villwock, Morford, & Rathmann, 2015; Hosemann, Mani, Herrmann, Steinbach, & Altvater-Mackensen, 2020; Meade et al., 2017; Morford et al., 2011; Morford, Kroll, Piñar, & Wilkinson, 2014; Morford, Occhino, Piñar, Wilkinson, & Kroll, 2017; Morford, Occhino, Zirnstein, Kroll, Wilkinson, & Piñar, 2019; Quandt & Kubicek, 2018) and hearing (Giezen et al., 2015; Marian & Spivey, 2003; Morford et al., 2014; Shook & Marian, 2012) bilingual signers. ...
Article
Full-text available
The well-known Stroop interference effect has been instrumental in revealing the highly automated nature of lexical processing as well as providing new insights to the underlying lexical organization of first and second languages within proficient bilinguals. The present cross-linguistic study had two goals: 1) to examine Stroop interference for dynamic signs and printed words in deaf ASL-English bilinguals who report no reliance on speech or audiological aids; 2) to compare Stroop interference effects in several groups of bilinguals whose two languages range from very distinct to very similar in their shared orthographic patterns: ASL-English bilinguals (very distinct), Chinese-English bilinguals (low similarity), Korean-English bilinguals (moderate similarity), and Spanish-English bilinguals (high similarity). Reaction time and accuracy were measured for the Stroop color naming and word reading tasks, for congruent and incongruent color font conditions. Results confirmed strong Stroop interference for both dynamic ASL stimuli and English printed words in deaf bilinguals, with stronger Stroop interference effects in ASL for deaf bilinguals who scored higher in a direct assessment of ASL proficiency. Comparison of the four groups of bilinguals revealed that the same-script bilinguals (Spanish-English bilinguals) exhibited significantly greater Stroop interference effects for color naming than the other three bilingual groups. The results support three conclusions. First, Stroop interference effects are found for both signed and spoken languages. Second, contrary to some claims in the literature about deaf signers who do not use speech being poor readers, deaf bilinguals' lexical processing of both signs and written words is highly automated. Third, cross-language similarity is a critical factor shaping bilinguals' experience of Stroop interference in their two languages. This study represents the first comparison of both deaf and hearing bilinguals on the Stroop task, offering a critical test of theories about bilingual lexical access and cognitive control.
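For reference, interference scores in studies like this one are conventionally computed as a congruency difference in response time; the following formulation is the standard one, assumed here rather than quoted from the paper:

\[
\text{Stroop interference} \;=\; \overline{RT}_{\text{incongruent}} - \overline{RT}_{\text{congruent}},
\]

with an analogous difference computed for accuracy. Larger positive values indicate stronger interference from the to-be-ignored lexical dimension.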
... Neville and her colleagues suggested that this may be because the native language of the deaf signers (ASL) is visual, and therefore relies more on RH mechanisms than on LH mechanisms. Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) suggested that when signing deaf individuals read English, the English words are automatically translated into their sign equivalents, which could result in a right hemisphere rather than a left hemisphere specialization for reading in deaf individuals. Such atypical hemispheric activation suggests that deaf individuals may rely on non-phonological knowledge during the processing of written materials. ...
... Research on deaf readers, on the other hand, demonstrates that their RH plays a more significant role than the LH during reading (Corina, Lawyer, Hauser, & Hirshorn, 2013; Neville et al., 1998; Sanders et al., 1989). The findings reported by Morford et al. (2011) may suggest the mechanism by which this hemispheric change occurs: they report evidence that when deaf signers read English, the English words are automatically translated into their sign equivalents. Given that sign language relies on the RH more than oral language does, it may be that the RH is more involved in reading English among deaf signers than in hearing readers. ...
Article
This study explored differences between the two hemispheres in processing written words among deaf readers. The main hypothesis was that impoverished phonological abilities of deaf readers may lead to atypical patterns of hemispheric involvement. To test this, deaf participants completed a metalinguistic awareness test to evaluate their orthographic and phonological awareness. Additionally, they were asked to read biased or neutral target sentences ending with an ambiguous homograph, with each sentence followed by the request to make a rapid lexical decision on a target word presented either to the left (LH) or right hemisphere (RH). Targets were either related to the more frequent, dominant meaning of the homograph, related to the less frequent, subordinate meaning of the homograph, or not related at all. An Inverse Efficiency Score based on both response latency and accuracy was calculated and revealed that deaf readers' RH performed better than their LH, in contrast to hearing readers, who in previous studies manifested left-hemisphere dominance when completing the same research design. The apparent divergence of deaf readers' hemispheric lateralization from that of hearing counterparts seems to validate previous findings suggesting greater reliance on RH involvement among deaf individuals during visual word recognition.
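The Inverse Efficiency Score mentioned above has a standard definition (Townsend & Ashby, 1983) that combines latency and accuracy into a single measure; the exact computation in this particular study may differ:

\[
\mathrm{IES} \;=\; \frac{\overline{RT}_{\text{correct}}}{PC},
\]

where \(\overline{RT}_{\text{correct}}\) is the mean response time on correct trials and \(PC\) is the proportion of correct responses, computed per participant and condition. Lower values indicate more efficient performance.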
... However, signing bilinguals provide an even stronger case for exploring whether non-selective access is truly characteristic of bilingual lexical processing, or is a narrower phenomenon linked to bilinguals' use of overlapping forms across their languages. Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) explored this question by adopting Thierry and Wu's (2007) implicit priming paradigm. ...
... These studies provide ample evidence that lexical access is non-selective for signing bilinguals, but they do not distinguish between activation spreading between the languages via lexical links, semantic links, or via top-down controlled processes. In order to rule out the possibility that participants were strategically translating English stimulus words into ASL after accessing the meaning of the English words, Morford, Occhino-Kehoe, Piñar, Wilkinson, & Kroll (2017) replicated the implicit priming study of Morford et al. (2011), but shortened the time course of the experiment to just 300 ms stimulus onset asynchrony. Their replication of cross-language implicit priming despite the much shorter time course would be consistent with the argument that activation is not the result of conscious translation. ...
Chapter
In the last two decades there has been an upsurge of research on the cognitive and neural basis of bilingualism. The initial discovery that the bilingual's two languages are active regardless of the intention to use one language alone, now replicated in hundreds of studies, has shaped the research agenda. The subsequent research has investigated the consequences of parallel activation of the two languages and considered the circumstances that might constrain language nonselectivity. At the same time, there has been emerging recognition that not all bilinguals are the same. Bilingualism takes different forms across languages and across unique interactional contexts. Understanding variation in language experience becomes a means to identify those linguistic, cognitive, and neural consequences of bilingualism that are universal and those that are language and situation specific. From this perspective, individuals who sign one language and speak or read the other become a critical source of information. The distinct features of sign, and the differences between sign and speech, become a tool that can be exploited to examine the mechanisms that enable dual language use and the consequences that bilingualism imposes on domain-general cognition. In this chapter, we review the recent evidence on bilingualism for both deaf and hearing signers. Our review suggests that many of the same principles that characterize spoken bilingualism can be seen in bilinguals who sign one language and speak or read the other. That conclusion does not imply that deaf vs. hearing language users are identical or that languages in different modalities are the same. Instead, the evidence suggests that the co-activation of a bilingual's two languages comes to shape the functional signatures of bilingualism in ways that are universal and profound.
... Or are the two linguistic systems interconnected, the effects of which can be observed even in monolingual environments? Findings from bilingual comprehension studies routinely show that words from both languages are simultaneously activated even in single-language contexts, including when a bilingual's two languages use different scripts or modalities (Hoshino & Kroll, 2008; Marian & Spivey, 2003; Morford et al., 2011; Slevc et al., 2016; Spivey & Marian, 1999; Thierry & Wu, 2007). That is, as spoken words unfold in time, bilinguals do not limit retrieval to the language currently in use, as lexical items from the currently unused language partially compete for selection. ...
... In contrast, bimodal bilinguals, such as speech-sign bilinguals, are not constrained to a single modality (Emmorey et al., 2008a). If competition for selection within the same modality is one underlying cause of increased reliance on and efficiency in cognitive control, then bimodal bilingualism may place different demands on bilingual language use (Blanco-Elorrieta et al., 2018; Emmorey et al., 2008b), despite the continuing presence of bilingual language co-activation across modalities (e.g., Morford et al., 2011; Pyers & Emmorey, 2008). ...
Article
Full-text available
The study of how bilingualism is linked to cognitive processing, including executive functioning, has historically focused on comparing bilinguals to monolinguals across a range of tasks. These group comparisons presume to capture relatively stable cognitive traits and have revealed important insights about the architecture of the language processing system that could not have been gleaned from studying monolinguals alone. However, there are drawbacks to using a group-comparison, or Traits, approach. In this theoretical review, we outline some limitations of treating executive functions as stable traits and of treating bilinguals as a uniform group when comparing them to monolinguals. To build on what we have learned from group comparisons, we advocate for an emerging complementary approach to the question of cognition and bilingualism. Using an approach that compares bilinguals to themselves under different linguistic or cognitive contexts allows researchers to ask questions about how language and cognitive processes interact based on dynamically fluctuating cognitive and neural states. A States approach, which has already been used by bilingualism researchers, allows for cause-and-effect hypotheses and shifts our focus from questions of group differences to questions of how varied linguistic environments influence cognitive operations in the moment and how fluctuations in cognitive engagement impact language processing.
... For example, Morford, Wilkinson, Villwock, Piñar and Kroll (2011) found that deaf American Sign Language (ASL)-English bilinguals were faster to decide that two English words were semantically related (e.g., bird and duck) when the ASL sign translation equivalents of these words overlapped in sign phonology (the signs BIRD and DUCK have the same location and movement and only differ in handshape). Conversely, they were slower to decide that two printed English words were not semantically related when their ASL translation equivalents overlapped in sign phonology. ...
... Our findings extend previous studies with deaf and hearing signers that found co-activation of signs during the processing of written or spoken words (Giezen et al., 2015; Meade et al., 2017; Morford et al., 2011, 2014, 2017; Ormel, 2008; Ormel et al., 2012; Shook & Marian, 2012; Villameriel et al., 2016) by showing that cross-language activation also occurs in the reverse direction, i.e., the co-activation of the spoken language during the processing of signs. The present results are consistent with the findings in two recent studies where ERP recordings revealed co-activation of words during sign processing (Hosemann et al., 2020; Lee et al., 2019), although these studies did not investigate the effect of mouthings. ...
Article
Full-text available
The present study provides insight into cross-language activation in hearing bimodal bilinguals by (1) examining co-activation of spoken words during processing of signs by hearing bimodal bilingual users of Dutch (their L1) and Sign Language of the Netherlands (NGT; late learners) and (2) investigating the contribution of mouthings to bimodal cross-language activation. NGT signs were presented with or without mouthings in two sign-picture verification experiments. In both experiments the phonological relation (unrelated, cohort overlap or final rhyme overlap) between the Dutch translation equivalents of the NGT signs and pictures was manipulated. Across both experiments, the results showed slower responses for sign-picture pairs with final rhyme overlap relative to phonologically unrelated sign-picture pairs, indicating co-activation of the spoken language during sign processing, but no significant effect for sign-picture pairs with cohort overlap in Dutch. In addition, co-activation was not affected by the presence or absence of mouthings.
... The influence of the native language (L1) on the second (L2) or third language (L3) and vice versa has been studied across different linguistic domains, for example phonology and syntax (Cárdenas-Hagan, Carlson, & Pollard-Durodola, 2007; Pika, Nicoladis, & Marentette, 2006). A number of studies focusing on CLI between the languages in a multilingual system showed that it occurred independently of L2 proficiency (Dijkstra & Van Heuven, 2002; Kroll & Tokowicz, 2005), the linguistic similarity between the languages (Cutler, Weber, & Otake, 2006), the orthographic systems of the languages (Hoshino & Kroll, 2008), and their written scripts (Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011). Furthermore, CLI was demonstrated to occur in language production as well as comprehension, and at different ages of acquisition (AoA), both in adults (Hoshino & Kroll, 2008) and in children (Poarch & Van Hell, 2012). ...
Article
Full-text available
This study investigated cross-linguistic interference in German low-proficient late learners of Spanish. We examined the modulating influence of gender congruency and cognate status using a syntactic violation paradigm. Behavioural results demonstrated that participants were more sensitive to similarities at the syntactic level (gender congruency) than to phonological and orthographic overlap (cognate status). Electrophysiological data showed that they were sensitive to syntactic violations (P600 effect) already in early acquisition stages. However, P600 effect sizes were not modulated by gender congruency or cognate status. Therefore, our late learners of Spanish did not seem to be susceptible to influences from inherent noun properties when processing non-native noun phrases at the neural level. Our results contribute to the discussion about the neural correlates of grammatical gender processing and sensitivity to syntactic violations in early acquisition stages.
... Despite the modality difference, bimodal bilingual speakers and unimodal bilinguals share a number of similarities. A neuropsychological finding is that highly proficient bimodal and unimodal bilinguals constantly have to suppress one language while using the other, because both languages are continuously active (Morford et al. 2011; Chabal and Marian 2015). This constant suppression of the "unwanted" language has been suggested to put bilinguals at a disadvantage in lexical retrieval (i.e., how fast they find a word) compared to their monolingual peers (Bialystok et al. 2008; Bialystok 2009; Duñabeitia et al. 2013; Giezen and Emmorey 2017). ...
Article
Full-text available
How does bimodal bilingualism—a signed and a spoken language—influence the writing process or the written product? The writing outcomes of twenty deaf and hard of hearing (DHH) children and hearing children of deaf adults (CODA) (mean age 11.6 years) with similar bimodal bilingual backgrounds were analyzed. During the writing of a narrative text, a keylogging tool was used that generated detailed information about the participants' writing process and written product. Unlike earlier studies, which have repeatedly shown that monolingual hearing children outperform their DHH peers in writing, there were few differences between the groups that were likely caused by their different hearing backgrounds, such as in their lexical density. Signing knowledge was negatively correlated with writing flow and pauses before words, and positively correlated with deleted characters, but these did not affect the written product negatively. Instead, the groups used different processes to reach similar texts. This study emphasizes the importance of including and comparing participants with similar language experience backgrounds. It may be deceptive to compare bilingual DHH children with hearing children who have other language backgrounds, as this risks showing language differences rather than effects of hearing status. This should always be controlled for by including true control groups with language experience similar to that of the examined groups.
... Bi-directional effects have been observed in the context of cross-modality language learning as well. Work by Morford et al. has shown that ASL signs are activated during English print word recognition in highly proficient ASL-English bilinguals, irrespective of language dominance (Morford et al., 2011, 2014). Similar effects have been reported for DGS (Deutsche Gebärdensprache, German Sign Language)-German bimodal bilinguals (Kubus et al., 2015; Hosemann et al., 2020). ...
Article
Full-text available
Previous work on placement expressions (e.g., “she put the cup on the table”) has demonstrated cross-linguistic differences in the specificity of placement expressions in the native language (L1), with some languages preferring more general, widely applicable expressions and others preferring more specific expressions based on more fine-grained distinctions. Research on second language (L2) acquisition of an additional spoken language has shown that learning the appropriate L2 placement distinctions poses a challenge for adult learners whose L2 semantic representations can be non-target like and have fuzzy boundaries. Unknown is whether similar effects apply to learners acquiring a L2 in a different sensory-motor modality, e.g., hearing learners of a sign language. Placement verbs in signed languages tend to be highly iconic and to exhibit transparent semantic boundaries. This may facilitate acquisition of signed placement verbs. In addition, little is known about how exposure to different semantic boundaries in placement events in a typologically different language affects lexical semantic meaning in the L1. In this study, we examined placement event descriptions (in American Sign Language (ASL) and English) in hearing L2 learners of ASL who were native speakers of English. L2 signers' ASL placement descriptions looked similar to those of two Deaf, native ASL signer controls, suggesting that the iconicity and transparency of placement distinctions in the visual modality may facilitate L2 acquisition. Nevertheless, L2 signers used a wider range of handshapes in ASL and used them less appropriately, indicating that fuzzy semantic boundaries occur in cross-modal L2 acquisition as well. In addition, while the L2 signers' English verbal expressions were not different from those of a non-signing control group, placement distinctions expressed in co-speech gesture were marginally more ASL-like for L2 signers, suggesting that exposure to different semantic boundaries can cause changes to how placement is conceptualized in the L1 as well.
... The reason that there are shorter reading times and regressions for homophones, compared to unrelated words, is that the homophones share the same sound phonology as the correctly spelled words. The literature suggests that deaf readers do engage in sign phonology during reading (Bélanger, Morford, & Rayner, 2013; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Ormel, Hermans, Knoors, & Verhoeven, 2012; Pan, Shu, Wang, & Yan, 2015; Treiman & Hirsh-Pasek, 1983). However, it is not known whether deaf readers who engage in sign phonology during reading are less skilled deaf readers. ...
Article
Full-text available
Phonological coding plays an important role in reading for hearing students. Experimental findings regarding phonological coding in deaf readers are controversial, and whether deaf readers are able to use phonological coding remains unclear. In the current study we examined whether Chinese deaf students could use phonological coding during sentence reading. Deaf middle school students, chronological age-matched hearing students, and reading ability-matched hearing students had their eye movements recorded as they read sentences containing correctly spelled characters, homophones, or unrelated characters. Both hearing groups had shorter total reading times on homophones than they did on unrelated characters. In contrast, no significant difference was found between homophones and unrelated characters for the deaf students. However, when the deaf group was divided into more-skilled and less-skilled readers according to their scores on reading fluency, the homophone advantage noted for the hearing controls was also observed for the more-skilled deaf students.
... The authors reported that accuracy and response times were inhibited in nonmatching conditions (i.e., printed word and picture did not match) where the sign translation equivalents of the print-picture pairs had strong sign phonological overlap, in comparison to nonmatching print-picture pairs whose NGT sign translation equivalents were phonologically unrelated. Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) report similar findings for bilingual (ASL-English) adult readers. Morford and colleagues (2011) compared processing of written English word pairs with ASL translation equivalents that shared at least two of three phonological parameters (handshape, location, movement) to word pairs with ASL translation equivalents that did not overlap phonologically. ...
Chapter
Full-text available
... (Korvorst, Nuerk, & Willmes, 2007). Indeed, studies suggest that the mental representations of sign language signs (i.e., sign language words) correspond to their bodily spatial signing procedure (Dye & Shih, 2006; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Wilson & Emmorey, 1998). In that sense, deaf signers hold mental embodied representations of sign language signs. ...
Article
Representations of the fingers are embodied in our cognition and influence performance in enumeration tasks. Among deaf signers, the fingers also serve as a tool for communication in sign language. Previous studies in normal hearing (NH) participants showed effects of embodiment (i.e., embodied numerosity) on tactile enumeration using the fingers of one hand. In this research, we examined the influence of extensive visuo‐manual use on tactile enumeration among the deaf. We carried out four enumeration task experiments, using 1–5 stimuli, on a profoundly deaf group (n = 16) and a matching NH group (n = 15): (a) tactile enumeration using one hand, (b) tactile enumeration using two hands, (c) visual enumeration of finger signs, and (d) visual enumeration of dots. In the tactile tasks, we found salient embodied effects in the deaf group compared to the NH group. In the visual enumeration of finger signs task, we controlled the meanings of the stimuli presentation type (e.g., finger‐counting habit, fingerspelled letters, both or neither). Interestingly, when comparing fingerspelled letters to neutrals (i.e., not letters or numerical finger‐counting signs), an inhibition pattern was observed among the deaf. The findings uncover the influence of rich visuo‐manual experiences and language on embodied representations. In addition, we propose that these influences can partially account for the lag in mathematical competencies in the deaf compared to NH peers. Lastly, we further discuss how our findings support a contemporary model for mental numerical representations and finger‐counting habits.
... Recent behavioral and electrophysiological evidence suggests that L1 translations are automatically activated in L2 word recognition even among advanced L2 speakers, indicating the continued involvement of the L2-L1 links initially established (e.g., Jiang, Li, & Guo, 2020; Thierry & Wu, 2004; Wu & Thierry, 2010). The same phenomenon has been observed in bimodal bilinguals whose first language was a sign language (Meade et al., 2017; Morford et al., 2011; Villameriel et al., 2016). These results suggest that the initial mode or route of lexical access is not abandoned with increased L2 experience. ...
Article
Full-text available
In studying the relationship between word recognition and reading development, a distinction is made between analytic and holistic processing of words. These strategies are often assessed in a length effect in an alphabetic language or in a stroke‐number effect in a logographic language. Analytical processing is associated with a robust length or stroke‐number effect while holistic processing is reflected in smaller or a lack of such effects. Research has shown that skilled readers employ holistic processing while less skilled readers rely more on analytical processing. The present study examined analytic versus holistic word recognition among second language learners by comparing learners of Chinese as a second language (CSL) and Chinese native speakers (NSs) in a lexical decision task. Thirty Chinese NSs and 28 CSL learners were tested on 90 disyllabic Chinese words that varied in stroke number from 5 to 27. A robust stroke‐number effect was found among CSL participants but not among NS controls. The findings raised a number of theoretical and pedagogical issues in relation to word recognition and reading development among CSL learners and among second language learners in general.
... This result is observed regardless of whether the two languages are similar or distinct (e.g., Hoshino & Kroll, 2008; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011), suggesting that the co-activation of the two languages is a feature of being bilingual, not a property of the languages themselves. Critically, the activation of the two languages does not depend on the requirement to use both languages in the same context. ...
Article
Full-text available
A goal of early research on language processing was to characterize what is universal about language. Much of the past research focused on native speakers because the native language has been considered as providing privileged truths about acquisition, comprehension, and production. Populations or circumstances that deviated from these idealized norms were of interest but not regarded as essential to our understanding of language. In the past two decades, there has been a marked change in our understanding of how variation in language experience may inform the central and enduring questions about language. There is now evidence for significant plasticity in language learning beyond early childhood, and variation in language experience has been shown to influence both language learning and processing. In this paper, we feature what we take to be the most exciting recent new discoveries suggesting that variation in language experience provides a lens into the linguistic, cognitive, and neural mechanisms that enable language processing.
... A common finding across these studies is that there is a response time interference effect for semantically unrelated trials with phonologically related ASL translations when compared to those with a phonologically unrelated ASL translation (see Emmorey et al., 2016, and Ormel & Giezen, 2014, for reviews). These effects may be modulated by English reading ability, language proficiency (Morford et al., 2014; Morford et al., 2011), and conscious awareness of task constraints (Meade et al., 2017). These interference effects may be quite general and have been documented for nonalphabetic orthographies as well. ...
Article
This study investigates reading comprehension in adult deaf and hearing readers. Using correlational analysis and stepwise regression, we assess the contribution of English language variables (e.g., vocabulary comprehension, reading volume, and phonological awareness), cognitive variables (e.g., working memory (WM), nonverbal intelligence, and executive function), and language experience (e.g., language acquisition and orthographic experience) in predicting reading comprehension in deaf and hearing adult bilinguals (native American Sign Language (ASL) signers, non-native ASL signers, and Chinese–English bilinguals (CEB)), and monolingual (ML) controls. For all four groups, vocabulary knowledge was a strong contributor to reading comprehension. Monolingual English speakers and non-native deaf signers also showed contributions from WM and spoken language phonological awareness. In contrast, CEB showed contributions of lexical strategies in English reading comprehension. These cross-group comparisons demonstrate how the inclusion of multiple participant groups helps us to further refine our understanding of how language and sensory experiences influence reading comprehension.
... It is well known that different linguistic systems interfere with each other at a variety of linguistic levels (Costa, Colomé, Gómez, & Sebastián-Gallés, 2003; Flege & Port, 1981; Odlin, 1989; Thierry & Wu, 2007), notably in syntactic encoding (Cai, Pickering, Yan, & Branigan, 2011; Hartsuiker, Pickering, & Veltkamp, 2004; Nicoladis, 2006; Yip & Matthews, 2000). Thus, when deaf speakers read or write, they are likely to also activate corresponding representations in their sign language (as a first/dominant language; e.g., Meade, Midgley, Sehyr, Holcomb, & Emmorey, 2017; Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011; Thierfelder, Wigglesworth, & Tang, 2020a). Indeed, by examining written productions of English by deaf users of Hong Kong Sign Language, Thierfelder and Stapleton (2016) showed that these writings contain linguistic errors (e.g., misuse of articles and count/mass nouns) that can be attributed to crosslinguistic influence from their first language, Hong Kong Sign Language. ...
Preprint
Full-text available
It remains unclear whether deaf and hearing speakers differ in the processes and representations underlying written language production. Using the structural priming paradigm, this study investigated syntactic and lexical influences on syntactic encoding in writing by deaf speakers of Chinese in comparison with hearing controls. Experiment 1 showed that deaf speakers tended to re-use a prior syntactic structure in written sentence production (i.e., structural priming) to the same extent as hearing speakers did; in addition, such a tendency was enhanced when the target sentence repeated the verb from the prime sentence (i.e., lexical boost) in both deaf and hearing speakers to the same extent. These results suggest that deaf and hearing speakers are similarly affected by syntactic and lexical factors in syntactic encoding in writing. Experiment 2 revealed comparable boosts in structural priming between prime-target pairs with homographic homophone verbs and prime-target pairs with heterographic homophone verbs in hearing speakers, but a boost for prime-target pairs with homographic homophone verbs but not those with heterographic homophone verbs in deaf speakers. These results suggest that while syntactic encoding in writing is influenced by lemma associations developed for homophones as a result of phonological identity in hearing speakers, it is influenced by lemma associations developed for homographs as a result of orthographic identity in deaf speakers. In all, syntactic encoding in writing seems to employ the same syntactic and lexical representations in hearing and deaf speakers, though lexical representations are shaped more by orthography than phonology in deaf speakers.
... This result seems consistent with the findings by Ormel et al. (2012) that deaf children are sensitive to phonological information in sign translation equivalents of Dutch words during visual word recognition. Similar results have been found in deaf teenagers (Villwock et al. 2021) and deaf adult readers (Morford et al. 2011, 2017). This result is also in line with other recent studies showing positive correlations between sign language knowledge and reading abilities (Scott and Hoffmeister 2016; Crume et al. 2021; Keck and Wolgemuth 2020; Holmer et al. 2016). ...
Article
Full-text available
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children’s learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged (8 to 10 years old at the start of the 3-year study) deaf children who took part in bilingual education (using Dutch and Sign Language of The Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2 while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographical/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities.
... The discrepancy between the results of this study and our parity judgment data could therefore be explained by the specificities of the sign language used by the participants. Within this context, the mental representations of numbers in deaf individuals may correspond to the bodily spatial signing procedure [36-38], meaning that deaf signers hold mental embodied representations of sign language signs (see also [39]). Another phenomenon to consider is that it is typical in signed face-to-face communication that the signer produces ordered sequences from left to right, while the observer perceives the signs with a right-to-left orientation. ...
Article
Full-text available
The literature suggests that deaf individuals lag behind their hearing peers in terms of mathematical abilities. However, it is still unknown how unique sensorimotor experiences, like deafness, might shape number-space interactions. We still do not know either the spatial frame of reference deaf individuals use to map numbers onto space in different numerical tasks. To examine these issues, deaf, hearing signer and hearing control adults were asked to perform a number comparison and a parity judgment task with the hands uncrossed and crossed over the body midline. Deafness appears to selectively affect the performance of the numerical task relying on verbal processes while keeping intact the task relying on visuospatial processes. Indeed, while a classic SNARC effect was found in all groups and in both hand postures of the number comparison task, deaf adults did not show the SNARC effect in both hand postures of the parity judgment task. These results are discussed in light of the spatial component characterizing the counting system used in sign language.
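For readers unfamiliar with the measure, the SNARC effect named above refers to faster left-sided responses to small numbers and faster right-sided responses to large numbers. It is commonly quantified with the regression method of Fias et al. (1996); whether this paper used that exact method is not stated here:

\[
dRT(n) \;=\; \overline{RT}_{\text{right}}(n) - \overline{RT}_{\text{left}}(n) \;=\; \alpha + \beta\, n + \varepsilon,
\]

where \(n\) is the number magnitude. A reliably negative slope \(\beta\) across participants indicates a SNARC effect, since large numbers then favor right-sided responses and small numbers left-sided ones.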
... On the basis of empirical research it can be stated that both of a bilingual's languages are active and in constant competition with each other, and this can be observed at every linguistic level. At the lexical level, the influence of the languages on each other can be demonstrated even when the bilingual needs to use only one of the languages, even when the bilingual is highly proficient in the second language, and even when the two languages are distant typologically (e.g., English-Chinese) or in modality (spoken-signed) (e.g., Hoshino & Kroll, 2008; Morford et al., 2011). ...
Article
Full-text available
This study examines the impact of different language learning contexts on the mother tongue. The study covers the spoken and written language production of Russian-speaking students starting their studies in Hungary and compares their results with those of their English-speaking and monolingual peers in Russia. The research instruments include a language use and proficiency questionnaire, semantic and letter fluency tests, storytelling on the basis of a comic strip, and written production. The study is longitudinal: participants′ language performance is measured at the start of the study and after four months. The results show the impact of the non-native language learning environment on the native language, as reflected in a decrease in vocabulary richness and an increase in the percentage of pauses.
... This question should be addressed for three reasons. First, signers have been found to co-activate signs while reading (Morford et al., 2011; Meade et al., 2017; Villwock et al., 2021), as well as to co-activate written/spoken words while comprehending sign pairs (Lee et al., 2019) and sentences (Hosemann et al., 2020) and in the production of signs (Gimeno-Martínez et al., 2021). Second, there is evidence for cross-modal prediction for spoken languages (Sánchez-García et al., 2011). ...
Article
Full-text available
The objective of this article was to review existing research to assess the evidence for predictive processing (PP) in sign language, the conditions under which it occurs, and the effects of language mastery (sign language as a first language, sign language as a second language, bimodal bilingualism) on the neural bases of PP. This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. We searched peer-reviewed electronic databases (SCOPUS, Web of Science, PubMed, ScienceDirect, and EBSCO host) and gray literature (dissertations in ProQuest). We also searched the reference lists of records selected for the review and forward citations to identify all relevant publications. We searched for records based on five criteria (original work, peer-reviewed, published in English, research topic related to PP or neural entrainment, and human sign language processing). To reduce the risk of bias, the remaining two authors with expertise in sign language processing and a variety of research methods reviewed the results. Disagreements were resolved through extensive discussion. In the final review, 7 records were included, of which 5 were published articles and 2 were dissertations. The reviewed records provide evidence for PP in signing populations, although the underlying mechanism in the visual modality is not clear. The reviewed studies addressed the motor simulation proposals, neural basis of PP, as well as the development of PP. All studies used dynamic sign stimuli. Most of the studies focused on semantic prediction. The question of the mechanism for the interaction between one’s sign language competence (L1 vs. L2 vs. bimodal bilingual) and PP in the manual-visual modality remains unclear, primarily due to the scarcity of participants with varying degrees of language dominance. There is a paucity of evidence for PP in sign languages, especially for frequency-based, phonetic (articulatory), and syntactic prediction. However, studies published to date indicate that Deaf native/native-like L1 signers predict linguistic information during sign language processing, suggesting that PP is an amodal property of language processing. Systematic Review Registration [ https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021238911 ], identifier [CRD42021238911].
... Similar results have been found for American Sign Language-English bilinguals (Morford, Wilkinson, Villwock, Piñar, & Kroll, 2011), suggesting that regardless of language combinations, bilinguals implicitly activate both their languages in monolingual contexts. ...
Preprint
Full-text available
Although bilingual children and elderly adults have been observed to outperform monolinguals in typical executive control tasks, this bilingual advantage is not consistently found in the young adult population. Proponents of the bilingual executive control advantage argue the reason for this is that task demands in the typical tasks used are not high enough, since young adults perform at ceiling level, whereas critics of the effect argue it has benefited from publication bias. Here we test the task-load hypothesis using a standard and a difficult version of the arrow-flanker task and identify stimulus processing characteristics underlying greater bilingual executive control. We increased task demands by using an "Opposite" task in which participants were to respond to the central arrow indicating its opposite direction whilst a task cue indicated which task was to be performed at each trial. Further increase in task difficulty was expected to arise from reducing the task preparation time by using different stimulus-onset-asynchronies between cue and target stimuli. As predicted, we observed no language group differences in the normal flanker task, whereas bilinguals displayed fewer errors than monolinguals and were less hampered by the difficult task than monolinguals when auditory task cues were used. Event-related potentials (ERPs) revealed that the bilinguals' conflict monitoring response occurred much earlier than the monolinguals' when the task cue was auditory but less so when the cue was visual. Indeed, bilinguals appeared to prioritize the cue signal when it was auditory, but not when it was visual. Further ERP results showed bilinguals displayed greater attentional responses to the target stimulus than monolinguals. Finally, the behavioral and conflict-monitoring ERP responses correlated with language proficiency and usage scores. Together, these results show that when task demands are high and auditory processing is part of the task, bilingual adults outperform monolinguals due to better stimulus identification and greater efficiency in managing task demands.
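The behavioral index at stake in flanker studies of this kind is the conflict effect, incongruent minus congruent response time, computed per cell of the design (here, per cue-target SOA). Below is a minimal sketch with invented trials; the SOA values and RTs are placeholders, not the study's data.

```python
# Illustrative computation of flanker conflict effects per SOA
# (hypothetical trial records; not the authors' actual analysis pipeline).
from collections import defaultdict
from statistics import mean

# (soa_ms, congruency, rt_ms) -- hypothetical trials
trials = [
    (100, "congruent", 480), (100, "incongruent", 560),
    (100, "congruent", 470), (100, "incongruent", 555),
    (400, "congruent", 450), (400, "incongruent", 500),
    (400, "congruent", 455), (400, "incongruent", 495),
]

by_cell = defaultdict(list)
for soa, cond, rt in trials:
    by_cell[(soa, cond)].append(rt)

for soa in sorted({s for s, _, _ in trials}):
    conflict = mean(by_cell[(soa, "incongruent")]) - mean(by_cell[(soa, "congruent")])
    print(f"SOA {soa} ms: conflict effect = {conflict:.0f} ms")
```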
... This result seems consistent with the findings by Ormel et al. (2012) that deaf children are sensitive to phonological information in sign translation equivalents of Dutch words during visual word recognition. Similar results have been found in deaf teenagers (Villwock et al. 2021) and deaf adult readers (Morford et al. 2011, 2017). This result is also in line with other recent studies showing positive correlations between sign language knowledge and reading abilities (Scott and Hoffmeister 2016; Crume et al. 2021; Keck and Wolgemuth 2020; Holmer et al. 2016). ...
Article
Full-text available
Reading continues to be a challenging task for most deaf children. Bimodal bilingual education creates a supportive environment that stimulates deaf children’s learning through the use of sign language. However, it is still unclear how exposure to sign language might contribute to improving reading ability. Here, we investigate the relative contribution of several cognitive and linguistic variables to the development of word and text reading fluency in deaf children in bimodal bilingual education programmes. The participants of this study were 62 school-aged (8 to 10 years old at the start of the 3-year study) deaf children who took part in bilingual education (using Dutch and Sign Language of The Netherlands) and 40 age-matched hearing children. We assessed vocabulary knowledge in speech and sign, phonological awareness in speech and sign, receptive fingerspelling ability, and short-term memory at time 1 (T1). At times 2 (T2) and 3 (T3), we assessed word and text reading fluency. We found that (1) speech-based vocabulary strongly predicted word and text reading at T2 and T3, (2) fingerspelling ability was a strong predictor of word and text reading fluency at T2 and T3, (3) speech-based phonological awareness predicted word reading accuracy at T2 and T3 but did not predict text reading fluency, and (4) fingerspelling and STM predicted word reading latency at T2 while sign-based phonological awareness predicted this outcome measure at T3. These results suggest that fingerspelling may have an important function in facilitating the construction of orthographical/phonological representations of printed words for deaf children and strengthening word decoding and recognition abilities. Keywords: deafness; reading development; bimodal bilingual education; word reading; text reading; sign language; phonological awareness; vocabulary; fingerspelling
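The predictive claims in this abstract (T1 measures predicting T2/T3 reading fluency) follow standard longitudinal regression logic. Here is a minimal sketch with randomly generated stand-in scores; the variable names, coefficients, and data are invented for illustration and are not the study's results.

```python
# Minimal sketch of the longitudinal prediction logic described above:
# T1 predictors (e.g., vocabulary, fingerspelling) regressed on a T2 reading
# outcome. All scores are invented; real analyses would use the study's data.
import numpy as np

rng = np.random.default_rng(1)
n = 62  # hypothetical sample size, echoing the abstract

vocabulary = rng.normal(50, 10, n)       # stand-in T1 speech-based vocabulary
fingerspelling = rng.normal(30, 5, n)    # stand-in T1 receptive fingerspelling
# Invented outcome loosely tied to the predictors, plus noise:
reading_t2 = 0.6 * vocabulary + 0.8 * fingerspelling + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), vocabulary, fingerspelling])
coefs, *_ = np.linalg.lstsq(X, reading_t2, rcond=None)
print(f"intercept={coefs[0]:.2f}, b_vocabulary={coefs[1]:.2f}, "
      f"b_fingerspelling={coefs[2]:.2f}")
```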
... The authors reported that accuracy and response times were inhibited in nonmatching conditions (i.e., printed word and picture did not match) where the sign translation equivalents of the print-picture pairs had strong sign phonological overlap in comparison to nonmatching print-picture pairs whose NGT sign translation equivalents were phonologically unrelated. Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) report similar findings for bilingual (ASL-English) adult readers. Morford and colleagues (2011) compared processing of written English word pairs with ASL translation equivalents that shared at least two of three phonological parameters (handshape, location, movement) to word pairs with ASL translation equivalents that did not overlap phonologically (a sketch of this selection criterion follows the record below). ...
Chapter
Full-text available
Assessment is an essential component of an effective bilingual literacy program. The relationship between language and literacy is complex. For bilingual individuals, the complexity of that relationship is increased. When bilingualism involves a signed language, the relationship becomes even more complicated, and disentangling the critical strands of language and literacy learning can be an ongoing challenge. This chapter provides a strengths-based perspective to guide educators in their assessment considerations when developing the literacy abilities of deaf and hard-of-hearing (DHH) bilingual learners, defined as children who are learning a signed language and concurrently a spoken/written language, such as ASL–English. In particular, the chapter explores the valuable ways that signed language abilities contribute to literacy development. Also highlighted is the critical and ongoing need for effective and culturally responsive signed language measures to better inform literacy teaching approaches. Keywords: assessment, signed language, bilingual deaf education, literacy development, reading
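To make the Morford et al. (2011) selection criterion mentioned in the citation context above concrete: English word pairs counted as form-related when their ASL translations shared at least two of the three parameters handshape, location, and movement. Below is a small illustrative sketch of that filter; the parameter codes are simplified placeholders (loosely inspired by the classic PAPER/CHEESE minimal pair), not a real phonological transcription.

```python
# Hypothetical sketch of the item-selection criterion described above:
# keep English word pairs whose ASL translations share >= 2 of 3 parameters.

# Invented placeholder parameter codes; real coding would come from a sign lexicon.
asl_params = {
    "paper":  {"handshape": "5", "location": "neutral", "movement": "contact"},
    "cheese": {"handshape": "5", "location": "neutral", "movement": "twist"},
    "dry":    {"handshape": "X", "location": "chin",    "movement": "across"},
}

def shared_parameters(w1: str, w2: str) -> int:
    p1, p2 = asl_params[w1], asl_params[w2]
    return sum(p1[k] == p2[k] for k in ("handshape", "location", "movement"))

pairs = [("paper", "cheese"), ("paper", "dry")]
for w1, w2 in pairs:
    n = shared_parameters(w1, w2)
    label = "form-related in ASL" if n >= 2 else "form-unrelated in ASL"
    print(f"{w1}-{w2}: {n}/3 parameters shared -> {label}")
```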
... However, there is also evidence for language modality-specific activations such as greater bilateral network recruitment with additional spatial-related processes in the parietal lobes for sign language processing. Morford et al. [63] found modality-specific interference effects, suggesting that the modality is not completely filtered out early in the decoding process. The other possibility is that, regardless of the language modality and its processing, something in the characteristics of the operation themselves dictate how the core systems are being recruited to support proficiency. ...
Article
Full-text available
Does experience with signed language impact the neurocognitive processes recruited by adults solving arithmetic problems? We used event-related potentials (ERPs) to identify the components that are modulated by operation type and problem size in Deaf American Sign Language (ASL) native signers and in hearing English-speaking participants. Participants were presented with single-digit subtraction and multiplication problems in a delayed verification task. Problem size was manipulated in small and large problems with an additional extra-large subtraction condition to equate the overall magnitude of large multiplication problems. Results show comparable behavioral results and similar ERP dissociations across groups. First, an early operation type effect is observed around 200 ms post-problem onset, suggesting that both groups have a similar attentional differentiation for processing subtraction and multiplication problems. Second, for the posterior-occipital component between 240 ms and 300 ms, subtraction problems show a similar modulation with problem size in both groups, suggesting that only subtraction problems recruit quantity-related processes. Control analyses exclude possible perceptual and cross-operation magnitude-related effects. These results are the first evidence that the two operation types rely on distinct cognitive processes within the ASL native signing population and that they are equivalent to those observed in the English-speaking population.
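Component analyses like the one described here rest on a simple measure: the mean amplitude of the ERP in a fixed post-stimulus window, compared across conditions. The sketch below illustrates that measure for the 240-300 ms window named in the abstract, using randomly generated stand-in epochs; the sampling rate, condition names, and data are all hypothetical.

```python
# Generic mean-amplitude measure used in ERP component analyses
# (hypothetical epochs; illustrative only, not the authors' pipeline).
import numpy as np

srate = 500                               # Hz (hypothetical)
times = np.arange(-0.2, 0.8, 1 / srate)   # epoch from -200 ms to 800 ms

rng = np.random.default_rng(0)
# Hypothetical single-channel epochs: trials x samples, in microvolts
epochs = {
    "subtraction_small": rng.normal(0.0, 1.0, (40, times.size)),
    "subtraction_large": rng.normal(0.5, 1.0, (40, times.size)),
}

win = (times >= 0.240) & (times <= 0.300)  # 240-300 ms analysis window
for cond, data in epochs.items():
    mean_amp = data[:, win].mean()
    print(f"{cond}: mean amplitude {mean_amp:.2f} uV in 240-300 ms")
```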
... This is in line with our results that reading proficiency did not result in a larger PLE for the deaf participants. It is possible that deaf readers co-activate sign language representations during reading (see Meade et al., 2017, and Morford et al., 2011, for evidence). Then the outcome of a priming task is modulated by the relationship between sign language lexical items; therefore, skilled deaf readers who automatically co-activate sign language representations will show inhibition for form-related sign pairs. ...
Article
Skilled reading is thought to rely on well-specified lexical representations that compete during visual word recognition. The establishment of these lexical representations is assumed to be driven by phonology. To test the role of phonology, we examined the prime lexicality effect (PLE), the index of lexical competition in signing deaf (N = 28) and hearing (N = 28) adult readers of Hungarian matched in age and education. We found no PLE for deaf readers even when reading skills were controlled for. Surprisingly, the hearing controls also showed reduced PLE; however, the effect was modulated by reading skill. More skilled hearing readers showed PLE, while more skilled deaf readers did not. These results suggest that phonology contributes to lexical competition; however, high-quality lexical representations are not necessarily built through phonology in deaf readers.
... Elliott, Braun, Kuhlmann, and Jacobs (2012) suggested that those phonological units, rather than being based on acoustic information, are based on mouth-shape-based units acquired through lip-reading. Some studies have shown that, upon exposure to a word's written form, deaf readers automatically activate a mental representation of the word's sign translation (Chiu & Wu, 2016; Meade et al., 2017; Morford et al., 2011; Pan et al., 2015), a process which appears to involve action simulation of the sign in the brain's sensorimotor system (Quandt & Kubicek, 2018). ...
Article
We used an error disruption paradigm to investigate how deaf readers from Hong Kong, who had varying levels of reading fluency, use orthographic, phonological, and mouth-shape-based (i.e., "visemic") codes during Chinese sentence reading while also examining the role of contextual information in facilitating lexical retrieval and integration. Participants had their eye movements recorded as they silently read Chinese sentences containing orthographic, homophonic, homovisemic, or unrelated errors. Sentences varied in terms of how much contextual information was available leading up to the target word. Fixation time analyses revealed that in early fixation measures, deaf readers activated word meanings primarily through orthographic representations. However, in contexts where targets were highly predictable, fixation times on homophonic errors decreased relative to those on unrelated errors, suggesting that higher levels of contextual predictability facilitated early phonological activation. In the measure of total reading time, results indicated that deaf readers activated word meanings primarily through orthographic representations, but they also appeared to activate word meanings through visemic representations in late error recovery processes. Examining the influence of reading fluency level on error recovery processes, we found that, in comparison to deaf readers with lower reading fluency levels, those with higher reading fluency levels could more quickly resolve homophonic and orthographic errors in the measures of gaze duration and total reading time, respectively. We conclude with a discussion of the theoretical implications of these findings as they relate to the lexical quality hypothesis and the dual-route cascaded model of reading by deaf adults.
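Two of the fixation-time measures named here, gaze duration and total reading time, are simple aggregates over a fixation record: gaze duration sums first-pass fixations on the target region before the eyes first leave it, and total reading time sums all fixations on that region. A minimal sketch with an invented fixation sequence and region coding (not the study's data) follows.

```python
# Standard reading-time measures from a fixation sequence (hypothetical data).
# gaze duration = sum of first-pass fixations on the target region;
# total reading time = sum of all fixations on the target region.

# (region_index, duration_ms) in chronological order; target is region 3.
fixations = [(1, 210), (2, 190), (3, 240), (3, 180), (4, 205), (3, 260)]
TARGET = 3

gaze, total, in_first_pass, entered = 0, 0, False, False
for region, dur in fixations:
    if region == TARGET:
        total += dur
        if not entered:
            entered, in_first_pass = True, True
        if in_first_pass:
            gaze += dur
    elif entered:
        in_first_pass = False   # eyes left the target: first pass is over

print(f"gaze duration = {gaze} ms, total reading time = {total} ms")
```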
... However, there is also evidence for language modality-specific activations such as greater bilateral network recruitment with additional spatial-related processes in the parietal lobes for sign language processing. Morford et al. [63] found modality-specific interference effects, suggesting that the modality is not completely filtered out early in the decoding process. The other possibility is that, regardless of the language modality and its processing, something in the characteristics of the operation themselves dictate how the core systems are being recruited to support proficiency. ...
Preprint
Full-text available
In this study, we investigate the impact of experience with a signed language on the neurocognitive processes recruited by adults solving single-digit arithmetic problems. We use event-related potentials (ERPs) to identify the components that are modulated by problem size and operation type in Deaf American Sign Language (ASL) native signers as well as in hearing English-speaking participants. Participants were presented with subtraction and multiplication problems in a delayed verification task. Problem size was manipulated in small and large problems, with an additional extra-large subtraction condition to equate the overall magnitude with that of large multiplication problems. Results show overall comparable behavioral performance across groups and similar ERP dissociations between operation types. First, an early operation type effect is observed between 180 ms and 210 ms post-problem onset, suggesting that both groups have a similar attentional differentiation for processing subtraction and multiplication problems. Second, on the posterior-occipital component between 240 ms and 300 ms, only subtraction problems show modulation with problem size in both groups, suggesting that only this operation recruits quantity-related processes. Control analyses exclude perceptual and magnitude-related explanations of this effect. These results are the first evidence that the two operations rely on distinct cognitive processes within the ASL native signing population and that this distinction is equivalent to the one observed in the English-speaking population.
... These brain areas have also been shown to activate for signers when tasks do not even involve sign language. Signers show behavioral facilitation (i.e., faster RTs) when comparing printed English words that have similar sign characteristics, but not similar English characteristics (Morford et al., 2011). These behavioral impacts are also seen on the neural level, as when signers read English words, sign language translations are implicitly activated (Meade et al., 2017;Quandt & Kubicek, 2018). ...
Article
Full-text available
Past work investigating spatial cognition suggests better mental rotation abilities for those who are fluent in a signed language. However, no prior work has assessed whether fluency is needed to achieve this performance benefit or what it may look like on the neurobiological level. We conducted an electroencephalography experiment and assessed accuracy on a classic mental rotation task given to deaf fluent signers, hearing fluent signers, hearing non-fluent signers, and hearing non-signers. Two of the main findings of the study are as follows: (1) Sign language comprehension and mental rotation abilities are positively correlated and (2) Behavioral performance differences between signers and non-signers are not clearly reflected in brain activity typically associated with mental rotation. In addition, we propose that the robust impact sign language appears to have on mental rotation abilities strongly suggests that "sign language use" should be added to future measures of spatial experiences.
Article
Full-text available
Since signs and words are perceived and produced in distinct sensory-motor systems, they do not share a phonological basis. Nevertheless, many deaf bilinguals master a spoken language with input based merely on visual cues like mouth representations of spoken words and orthographic representations of written words. Recent findings further suggest that processing of words involves cross-language cross-modal co-activation of signs in deaf and hearing bilinguals. Extending these findings in the present ERP study, we recorded the electroencephalogram (EEG) of fifteen congenitally deaf bilinguals of German Sign Language (DGS; native L1) and German (early L2) as they saw videos of semantically and grammatically acceptable sentences in DGS. Within these DGS sentences, two signs functioned as prime and target. Prime and target signs either had an overt phonological overlap as signs (phonological priming in DGS) or were phonologically unrelated as signs but had a covert orthographic overlap in their written German translation (orthographic priming in German). Results showed a significant priming effect for both conditions. Target signs that were either phonologically related as signs or had an underlying orthographic overlap in their written German translation engendered a less negative-going polarity in the electrophysiological signal compared to overall unrelated control targets. We thus provide the first evidence that deaf bilinguals co-activate their second, 'spoken/written' language (German) during whole-sentence processing of their native sign language (DGS).
Article
An exploratory reading intervention that used ASL stories, some with and some without visual handshape rhymes, to foster English print vocabulary was evaluated. Four prelingually, profoundly deaf signing students, between seven and eight years of age and reading at the first-grade level or below, took part in the intervention. During group story time sessions, stories in American Sign Language (ASL) were presented on PowerPoint slides that included stories translated into both ASL and English, along with short lessons using bilingual strategies. Using a pretest-posttest design, the print words were presented within ASL stories across three conditions: (1) with no ASL handshape rhyme, (2) with ASL handshape rhyme, and (3) with English word families (e.g., cat, sat, bat) that rhyme. Students' vocabulary gains were significant for the ASL stories with handshape rhymes, marginally significant for the non-rhyming ASL stories, and non-significant for the stories with rhyming English word families. These findings point to the importance of rhyme for young deaf children attending ASL/English bilingual programs and suggest that creating ASL stories with rhyme can help to bootstrap literacy. Future directions for research are recommended.
Chapter
Full-text available
The chapter examines the relationship between orthography, phonology, and morphology in Turkish and what this means for Turkish-English bilingual language processing. Turkish offers a unique language medium in pitching theoretical perspectives both in linguistics and psycholinguistics against each other because of its properties. Empirical and theoretical considerations are employed from both domains in order to shed light on some of the current challenges. In line with contemporary thought, this chapter is written with the view that bilingual speakers engage a singular language or lexical system characterized by fluid and dynamic processes. Particular focus will be given to English-Turkish speaking bilinguals in the UK, which includes heritage (HL) and non-heritage language speakers. Evidence from monolingual developmental research as well as neuropsychology will be examined to confirm findings of previous studies in other European contexts, and also to raise attention to various challenges which need to be addressed across all contexts.
Preprint
Full-text available
This study’s primary purpose is to validate and generate insights into emergent and habitual processes for learning, long-term storage, and access to phonological productions or words in the mental lexicon.
Article
In individuals who know more than one language, the languages are always active to some degree. This has consequences for language processing, but bilinguals rarely make mistakes in language selection. A prevailing explanation is that bilingualism is supported by strong cognitive control abilities, developed through long-term practice with managing multiple languages and spilling over into more general executive functions. However, not all bilinguals are the same, and not all contexts for bilingualism provide the same support for control and regulation abilities. This paper reviews research on hearing sign–speech bimodal bilinguals who have a unique ability to use and comprehend their two languages at the same time. We discuss the role of this research in re-examining the role of cognitive control in bilingual language regulation, focusing on how results from bimodal bilingualism research relate to recent findings emphasizing the correlation of control abilities with a bilingual’s contexts of language use. Most bimodal bilingualism research has involved individuals in highly English-dominant language contexts. We offer a critical examination of how existing bimodal bilingualism findings have been interpreted, discuss the value of broadening the scope of this research and identify long-standing questions about bilingualism and L2 learning which might benefit from this perspective.
Article
Deaf signers exhibit an enhanced ability to process information in their peripheral visual field, particularly the motion of dots or orientation of lines. Does their experience processing sign language, which involves identifying meaningful visual forms across the visual field, contribute to this enhancement? We tested whether deaf signers recruit language knowledge to facilitate peripheral identification through a sign superiority effect (i.e., better handshape discrimination in a sign than a pseudosign) and whether such a superiority effect might be responsible for perceptual enhancements relative to hearing individuals (i.e., a decrease in the effect of eccentricity on perceptual identification). Deaf signers and hearing signers or nonsigners identified the handshape presented within a static ASL fingerspelling letter (Experiment 1), fingerspelled sequence (Experiment 2), or sign or pseudosign (Experiment 3) presented in the near or far periphery. Accuracy on all tasks was higher for deaf signers than hearing nonsigning participants and was higher in the near than the far periphery. Across experiments, there were different patterns of interactions between hearing status and eccentricity depending on the type of stimulus; deaf signers showed an effect of eccentricity for static fingerspelled letters, fingerspelled sequences, and pseudosigns but not for ASL signs. In contrast, hearing nonsigners showed an effect of eccentricity for all stimuli. Thus, deaf signers recruit lexical knowledge to facilitate peripheral perceptual identification, and this perceptual enhancement may derive from their extensive experience processing visual linguistic information in the periphery during sign comprehension.
Article
A number of studies provide evidence for a phonological priming effect in the recognition of single signs based on phonological parameters, and that the specific phonological parameters modulated in the priming effect can influence the robustness of this effect. This eye-tracking study on German Sign Language examined phonological priming effects at the sentence level, while varying the phonological relationship between prime-target sign pairs. We recorded participants' eye movements while presenting videos of sentences containing either related or unrelated prime-target sign pairs, and pictures of the target and an unrelated distractor. We observed a phonological priming effect for sign pairs sharing handshape and movement while differing in the location parameter. Taken together, the data suggest a difference in the contribution of sign parameters to sign recognition and that sub-lexical features influence sign language processing.
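Eye-tracking studies of this kind typically report the proportion of looks to each picture over time. Below is a minimal sketch of that computation, binning invented gaze samples; the region labels, bin size, and times are placeholders, not the study's data.

```python
# Proportion-of-looks computation typical of visual-world eye tracking
# (hypothetical gaze samples; illustrative only).
from collections import Counter

# (time_ms, region) gaze samples; regions: "target", "distractor", "other"
samples = [
    (20, "other"), (60, "other"), (120, "distractor"), (160, "target"),
    (220, "target"), (260, "target"), (320, "target"), (360, "distractor"),
]

BIN = 200  # ms per bin
bins = {}
for t, region in samples:
    bins.setdefault(t // BIN, Counter())[region] += 1

for b in sorted(bins):
    counts = bins[b]
    n = sum(counts.values())
    print(f"{b * BIN}-{(b + 1) * BIN} ms: "
          f"P(target) = {counts['target'] / n:.2f}, "
          f"P(distractor) = {counts['distractor'] / n:.2f}")
```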
Article
Full-text available
The purpose of this study was to examine assumptions about the connection between bilingualism and cognitive functioning. Research showing an advantage of bilingual individuals over monolinguals in cognitive functioning is often explained by the mechanisms that allow bilingual individuals to control and represent two languages in the brain. Our study included children aged 9 to 11 years: a group of bilingual children who speak Slovene and Hungarian and a control group of monolingual, Slovene-speaking children. We tested them with the following cognitive ability tests: executive functions with the TMT and the Stroop test, working memory with forward and backward digit span tasks, and verbal abilities with a verbal fluency test and vocabulary. The data showed that, although verbal fluency was lower in the bilingual group, bilingual children performed better on versions of the Stroop task, which could indicate an advantage in speed of processing and, to a lesser extent, in the ability to handle conflicting information.
Book
Cambridge Core - Phonetics and Phonology - Sign Language Phonology - by Diane Brentari
Article
The primary goal of research on the functional and neural architecture of bilingualism is to elucidate how bilingual individuals' language architecture is organized such that they can both speak in a single language without accidental insertions of the other, but also flexibly switch between their two languages if the context allows/demands them to. Here we review the principles under which any proposed architecture could operate, and present a framework where the selection mechanism for individual elements strictly operates on the basis of the highest level of activation and does not require suppressing representations in the non-target language. We specify the conjunction of parameters and factors that jointly determine these levels of activation and develop a theory of bilingual language organization that extends beyond the lexical level to other levels of representation (i.e., semantics, morphology, syntax and phonology). The proposed architecture assumes a common selection principle at each linguistic level to account for attested features of bilingual speech in, but crucially also out, of experimental settings.
Article
Full-text available
The article addresses a gap in knowledge about the influence of unbalanced heritage bilingualism on the cognitive regulation of children who are highly verbally active in their second (Russian) language. The study involved junior schoolchildren aged 7-8 years: (1) the main group (N=22), children with unbalanced heritage bilingualism who inherited the Tatar language and varied in their linguistic competence in the native and Russian languages, and (2) the comparison group (N=30), monolingual Russian-speaking children; all attended educational institutions of the Udmurt Republic. The empirical results showed that, along with more pronounced planning, the bilinguals' cognitive regulation system was more plastic and flexible, providing integrative regulatory potential. We assume that these advantages arise from the interaction of the bilinguals' two language systems.
Article
Full-text available
How Deaf children should be taught to read has long been debated. Severely or profoundly Deaf children, who face challenges in acquiring language from its spoken forms, must learn to read a language they do not speak. We refer to this as learning a language via print. How children can learn language via print is not a topic regularly studied by educators, psychologists, or language acquisition theorists. Nonetheless, Deaf children can do this. We discuss how Deaf children can learn a written language via print by mapping print words and phrases to sign language sequences. However, established, time-tested curricula for using a signed language to teach the print forms of spoken languages do not exist. We describe general principles for approaching this task, how it differs from acquiring a spoken language naturalistically, and empirical evidence that Deaf children's knowledge of a signed language facilitates and advances learning a printed language.
Article
In bilingual word recognition, cross-language activation has been found in unimodal bilinguals (e.g., Chinese-English bilinguals) and bimodal bilinguals (e.g., American Sign language-English bilinguals). However, it remains unclear how signs' phonological parameters, spoken words' orthographic and phonological representation, and language proficiency affect cross-language activation in bimodal bilinguals. To resolve the issues, we recruited deaf Chinese sign language (CSL)-Chinese bimodal bilinguals as participants. We conducted two experiments with the implicit priming paradigm and the semantic relatedness decision task. Experiment 1 first showed cross-language activation from Chinese to CSL, and the CSL words' phonological parameter affected the cross-language activation. Experiment 2 further revealed inverse cross-language activation from CSL to Chinese. The Chinese words' orthographic and phonological representation played a similar role in the cross-language activation. Moreover, a comparison between Experiments 1 and 2 indicated that language proficiency influenced cross-language activation. The findings were further discussed with the Bilingual Interactive Activation Plus (BIA+) model, the deaf BIA+ model, and the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) model.
Experiment Findings
Full-text available
An attempt to replicate, and to provide a more complete empirical background for, the material discussed in Lupker, S. J., Perea, M., & Davis, C. J. (2008). Transposed-letter effects: Consonants, vowels and letter frequency. Language and Cognitive Processes, 23(1), 93–116.
Chapter
Full-text available
Models of lexical access seek to explain how incoming language data is mapped onto long-term lexical representations. The experiment reported here aims to provide insight into which elements of language input are used for mapping onto a sign language lexicon. Rather than using the organs of the vocal tract, sign languages use the arms, hands, body and face to create meaning, combining handshapes, locations and movements to create meaningful words (signs). This study aims to determine whether these parameters are also used in lexical access processes. Twelve deaf native and twelve deaf non-native signers of British Sign Language (BSL) were presented with a primed lexical decision task. They were required to make a lexical decision about a target sign after viewing a preceding prime that was phonologically related to the target. Analysis of the data suggests that native signers use phonological information in signs in order to access their mental lexicon. Moreover, it appears that the salient parameter in the input is a combination of location and movement – only when both these parameters are shared by prime and target is facilitatory priming observed. There was no evidence that non-native (deaf) signers used phonological parameters to access their lexicon, despite a high degree of success in the lexical decision task at a speed comparable to that of native signers. These findings are discussed in relation to sign language acquisition and the development of phonological theories of signed languages.
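The facilitatory priming finding reported here boils down to a contrast of target decision latencies by prime type: priming effect = mean RT after unrelated primes minus mean RT after related primes, computed per overlap condition. A minimal sketch with invented RTs and overlap labels (these are not the study's data) follows.

```python
# Hypothetical sketch of a phonological priming analysis:
# priming effect = mean RT(unrelated prime) - mean RT(related prime),
# computed separately for each type of parameter overlap.
from collections import defaultdict
from statistics import mean

# (overlap_type, rt_ms) for correct "yes" responses to real-sign targets
trials = [
    ("location+movement", 610), ("location+movement", 595),
    ("handshape_only", 660), ("handshape_only", 655),
    ("unrelated", 665), ("unrelated", 650),
]

rts = defaultdict(list)
for overlap, rt in trials:
    rts[overlap].append(rt)

baseline = mean(rts["unrelated"])
for overlap in ("location+movement", "handshape_only"):
    effect = baseline - mean(rts[overlap])
    print(f"{overlap}: priming effect = {effect:.0f} ms")
```

With these invented numbers, only the location+movement condition shows facilitation, mirroring the direction of the pattern the abstract describes.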
Article
Full-text available
A goal of second language (L2) learning is to enable learners to understand and speak L2 words without mediation through the first language (L1). However, psycholinguistic research suggests that lexical candidates are routinely activated in L1 when words in L2 are processed. In this article we describe two experiments that examined the acquisition of L2 lexical fluency. In Experiment 1, two groups of native English speakers, one more and one less fluent in French as their L2, performed word naming and translation tasks. Learners were slower and more error prone to name and to translate words into L2 than more fluent bilinguals. However, there was also an asymmetry in translation performance such that forward translation was slower than backward translation. Learners were also slower than fluent bilinguals to name words in English, the L1 of both groups. In Experiment 2, we compared the performance of native English speakers at early stages of learning French or Spanish to the performance of fluent bilinguals on the same tasks. The goal was to determine whether the apparent cost to L1 reading was a consequence of L2 learning or a reflection of differences in cognitive abilities between learners and bilinguals. Experiment 2 replicated the main features of Experiment 1 and showed that bilinguals scored higher than learners on a measure of L1 reading span, but that this difference did not account for the apparent cost to L1 naming. We consider the implications of these results for models of the developing lexicon.
Article
Full-text available
In recent years, multiple studies have shown that the languages of a bilingual interact during processing. We investigated sign activation as deaf children read words. In a word–picture verification task, we manipulated the underlying sign equivalents. We presented children with word–picture pairs for which the sign translation equivalents varied with respect to sign phonology overlap (i.e., handshape, movement, hand-palm orientation, and location) and sign iconicity (i.e., transparent depiction of meaning or not). For the deaf children, non-matching word–picture pairs with sign translation equivalents that had highly similar elements (i.e., strong sign phonological relations) showed relatively longer response latencies and more errors than non-matching word–picture pairs without sign phonological relations (inhibitory effects). In contrast, matching word–picture pairs with strongly iconic sign translation equivalents showed relatively shorter response latencies and fewer errors than pairs with weakly iconic translation equivalents (facilitatory effects). No such activation effects were found in the word–picture verification task for the hearing children. The results provide evidence for interactive cross-language processing in deaf children.
Article
Full-text available
The present paper summarizes three experiments that investigate the effects of age of acquisition on first-language (L1) acquisition in relation to second-language (L2) outcome. The experiments use the unique acquisition situations of childhood deafness and sign language. The key factors controlled across the studies are age of L1 acquisition, the sensory–motor modality of the language, and level of linguistic structure. Findings consistent across the studies show age of L1 acquisition to be a determining factor in the success of both L1 and L2 acquisition. Sensory–motor modality shows no general or specific effects. It is of importance that the effects of age of L1 acquisition on both L1 and L2 outcome are apparent across levels of linguistic structure, namely, syntax, phonology, and the lexicon. The results demonstrate that L1 acquisition bestows not only facility with the linguistic structure of the L1, but also the ability to learn linguistic structure in the L2.
Article
Full-text available
This study places the predictions of the bilingual interactive activation model (Dijkstra & Van Heuven, 1998) and the revised hierarchical model (Kroll & Stewart, 1994) in the same context to investigate lexical processing in a second language (L2). The performances of two groups of native English speakers, one less proficient and the other more proficient in Spanish, were compared on translation recognition. In this task, participants decided whether two words, one in each language, are translation equivalents. The items in the critical conditions were not translation equivalents and therefore required a “no” response, but were similar to the correct translation in either form or meaning. For example, for translation equivalents such as cara-face, critical distracters included (a) a form-related neighbor to the first word of the pair (e.g., cara-card), (b) a form-related neighbor to the second word of the pair, the translation equivalent (cara-fact), or (c) a meaning-related word (cara-head). The results showed that all learners, regardless of proficiency, experienced interference for lexical neighbors and for meaning-related pairs. However, only the less proficient learners also showed effects of form relatedness via the translation equivalent. Moreover, all participants were sensitive to cues to grammatical class, such that lexical interference was reduced or eliminated when the two words of each pair were drawn from different grammatical classes. We consider the implications of these results for L2 lexical processing and for models of the bilingual lexicon.
Article
Full-text available
Bilingual speech requires that the language of utterances be selected prior to articulation. Past research has debated whether the language of speaking can be determined in advance of speech planning and, if not, the level at which it is eventually selected. We argue that the reason that it has been difficult to come to an agreement about language selection is that there is not a single locus of selection. Rather, language selection depends on a set of factors that vary according to the experience of the bilinguals, the demands of the production task, and the degree of activity of the nontarget language. We demonstrate that it is possible to identify some conditions that restrict speech planning to one language alone and others that open the process to cross-language influences. We conclude that the presence of language nonselectivity at all levels of planning spoken utterances renders the system itself fundamentally nonselective.
Article
Full-text available
Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how lexical access takes place in deaf users of signed languages. This paper examines whether two properties, lexical familiarity and phonological neighborhood, which are known to influence recognition in spoken languages, influence lexical access in Spanish Sign Language (Lengua de Signos Española, LSE). Our results indicate that the representational factors of lexical familiarity and phonological neighborhood can be observed in native and non-native deaf users of LSE. In addition, the present data provide evidence for the importance of sub-lexical properties in sign language processing.
Article
Full-text available
Past research has established that delayed first language exposure is associated with comprehension difficulties in non-native signers of American Sign Language (ASL) relative to native signers. The goal of the current study was to investigate potential explanations of this disparity: do non-native signers have difficulty with all aspects of comprehension, or are their comprehension difficulties restricted to some aspects of processing? We compared the performance of deaf non-native, hearing L2, and deaf native signers on a handshape and location monitoring and a sign recognition task. The results indicate that deaf non-native signers are as rapid and accurate on the monitoring task as native signers, with differences in the pattern of relative performance across handshape and location parameters. By contrast, non-native signers differ significantly from native signers during sign recognition. Hearing L2 signers, who performed almost as well as the two groups of deaf signers on the monitoring task, resembled the deaf native signers more than the deaf non-native signers on the sign recognition task. The combined results indicate that delayed exposure to a signed language leads to an overreliance on handshape during sign recognition.
Article
Full-text available
Do all components of a sign contribute equally to its recognition? In the present study, misperceptions in the sign-spotting task (based on the word-spotting task; Cutler & Norris, 1988) were analyzed to address this question. Three groups of deaf signers of British Sign Language (BSL) with different ages of acquisition (AoA) saw BSL signs combined with nonsense signs, along with combinations of two nonsense signs. They were asked to spot real signs and report what they had spotted. We present an analysis of false alarms to the nonsense-sign combinations, that is, misperceptions of nonsense signs as real signs (cf. van Ooijen, 1996). Participants modified the movement and handshape parameters more than the location parameter. Within this pattern, however, there were differences as a function of AoA. These results show that the theoretical distinctions between form-based parameters in sign-language models have consequences for online processing. Vowels and consonants have different roles in speech recognition; similarly, it appears that movement, handshape, and location parameters contribute differentially to sign recognition.
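The misperception analysis described here amounts to tallying which parameter of each nonsense sign was altered when it was reported as a real sign. The sketch below shows that tally on invented records; the counts are placeholders chosen only to echo the direction of the reported pattern (movement and handshape modified more than location).

```python
# Tally of which parameter was modified in each misperception
# (invented records; illustrative of the analysis logic only).
from collections import Counter

# Each record: the parameter that differed between the nonsense sign seen
# and the real sign the participant reported.
misperceptions = ["movement", "handshape", "movement", "movement",
                  "handshape", "location", "movement", "handshape"]

counts = Counter(misperceptions)
total = sum(counts.values())
for param in ("movement", "handshape", "location"):
    print(f"{param}: {counts[param]}/{total} "
          f"({100 * counts[param] / total:.0f}%) of misperceptions")
```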
Article
Full-text available
Speech-sign or "bimodal" bilingualism is exceptional because distinct modalities allow for simultaneous production of two languages. We investigated the ramifications of this phenomenon for models of language production by eliciting language mixing from eleven hearing native users of American Sign Language (ASL) and English. Instead of switching between languages, bilinguals frequently produced code-blends (simultaneously produced English words and ASL signs). Code-blends resembled co-speech gesture with respect to synchronous vocal-manual timing and semantic equivalence. When ASL was the Matrix Language, no single-word code-blends were observed, suggesting stronger inhibition of English than ASL for these proficient bilinguals. We propose a model that accounts for similarities between co-speech gesture and code-blending and assumes interactions between ASL and English Formulators. The findings constrain language production models by demonstrating the possibility of simultaneously selecting two lexical representations (but not two propositions) for linguistic expression and by suggesting that lexical suppression is computationally more costly than lexical selection.
Article
Full-text available
A sign decision task, in which deaf signers made a decision about the number of hands required to form a particular sign of American Sign Language (ASL), revealed significant facilitation by repetition among signs that share a base morpheme. A lexical decision task on English words revealed facilitation by repetition among words that share a base morpheme in both English and ASL, but not among those that share a base morpheme in ASL only. This outcome occurred for both deaf and hearing subjects. The results are interpreted as evidence that the morphological principles of lexical organization observed in ASL do not extend to the organization of English for skilled deaf readers.
Article
Full-text available
Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial “dual-language” context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese–English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware of the fact that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension. Keywords: bilingualism, event-related potentials, language access, semantic priming, unconscious priming
Chapter
When a book is translated, the meaning of the original should be preserved in the words of the target language.
Article
The deaf community is widely heterogeneous in its language background. Widespread variation in fluency exists even among users of American Sign Language (ASL), the natural gestural language used by deaf people in North America. This variability is a source of unwanted "noise" in many psycholinguistic and pedagogical studies. Our aim is to develop a quantitative test of ASL fluency to allow researchers to measure and make use of this variability. We present a new test paradigm for assessing ASL fluency modeled after the Speaking Grammar Subtest of the Test of Adolescent and Adult Language, 3rd Edition (TOAL3; Hammill, Brown, Larsen, & Wiederholt, 1994). The American Sign Language-Sentence Reproduction Test (ASL-SRT) requires participants to watch computer-displayed video clips of a native signer signing sentences of increasing length and complexity. After viewing each sentence, the participant has to sign back the sentence just viewed. We review the development of appropriate test sentences, rating procedures and inter-rater reliability, and show how our preliminary version of the test already distinguishes between hearing and deaf users of ASL, as well as native and non-native users.
Article
Three experiments are reported in which picture naming and bilingual translation were performed in the context of semantically categorized or randomized lists. In Experiments 1 and 3 picture naming and bilingual translation were slower in the categorized than randomized conditions. In Experiment 2 this category interference effect in picture naming was eliminated when picture naming alternated with word naming. Taken together, the results of the three experiments suggest that in both picture naming and bilingual translation a conceptual representation of the word or picture is used to retrieve a lexical entry in one of the speaker's languages. When conceptual activity is sufficiently great to activate a multiple set of corresponding lexical representations, interference is produced in the process of retrieving a single best lexical candidate as the name or translation. The results of Experiment 3 showed further that category interference in bilingual translation occurred only when translation was performed from the first language to the second language, suggesting that the two directions of translation engage different interlanguage connections. A model to account for the asymmetric mappings of words to concepts in bilingual memory is described.
Article
This book is written primarily for those studying linguistic topics in the area of sign language, but also can be useful to sign language teachers who want to understand more about American Sign Language (ASL). Pen-and-ink illustrations allow the reader with no knowledge of sign language to follow the discussion. The hypothesis examined in this study is the following: As with oral languages, lexical borrowing from one manual language into another is accompanied by lexical restructuring in accordance with the formational and morphological principles of the borrowing language. This study examines some English words that are fingerspelled by signers and physically change to become ASL signs in a systematic and predictable manner. This implies that the process of word borrowing and restructuring in ASL is highly similar to the same process in spoken languages. The focus of this study is on the formational aspects of signing. An analysis of loan signs and the English influence that prompts their borrowing also depends on the social world of signers, which is discussed in terms of those aspects of social interaction that create ASL-English bilinguals. The chapters include: (1) Analyzing Signs, (2) Signs in Action, (3) Social Issues, (4) Loan Signs from Fingerspelled Words, and (5) Analysis and Discussion. Appended are illustrations of the handshapes of fingerspelling and a table of symbols used in "A Dictionary of American Sign Language."
Article
Presented signs of American Sign Language in list lengths of 3-7 items to deaf college students whose native language is American Sign Language, in a short-term memory experiment. A comparable experiment, using words, was presented to 8 hearing college students. Recall was written, immediate, and ordered. Overall, short-term memory mechanisms in the deaf seem to parallel those found in hearing Ss, even with the modality change. A significant number of multiple intrusion errors made by deaf Ss to signs were based on formational properties of the signs themselves, a result paralleling the phonologically based errors in experiments with hearing Ss. Results are consistent with a theory that the signs of American Sign Language are actually coded by the deaf in terms of simultaneous formational parameters such as hand configuration, place of articulation, and movement. Evidence is given that signs are treated by the deaf as consisting of independent parameters--specific to American Sign Language--which are essentially arbitrary in terms of meaning.
Article
Two eye-tracking experiments examined spoken language processing in Russian-English bilinguals. The proportion of looks to objects whose names were phonologically similar to the name of a target object in either the same language (within-language competition), the other language (between-language competition), or both languages at the same time (simultaneous competition) was compared to the proportion of looks in a control condition in which no objects overlapped phonologically with the target. Results support previous findings of parallel activation of lexical items within and between languages, but suggest that the magnitude of the between-language competition effect may vary across first and second languages and may be mediated by a number of factors such as stimuli, language background, and language mode.
Article
In 2 experiments, relatively proficient Chinese-English bilinguals decided whether Chinese words were the correct translations of English words. Critical trials were those on which incorrect translations were related in lexical form or meaning to the correct translation. In Experiment 1, behavioral interference was revealed for both distractor types, but event-related potentials (ERPs) revealed a different time course for the 2 conditions. Semantic distractors elicited effects primarily on the N400 and late positive component (LPC), with a smaller N400 and a smaller LPC over the posterior scalp but a larger LPC over the anterior scalp relative to unrelated controls. In contrast, translation form distractors elicited a larger P200 and a larger LPC than did unrelated controls. To determine whether the translation form effects were enabled by the relatively long, 750-ms stimulus onset asynchrony (SOA) between words, a 2nd ERP experiment was conducted using a shorter, 300-ms, SOA. The behavioral results revealed interference for both types of distractors, but the ERPs again revealed different loci for the 2 effects. Taken together, the data suggest that proficient bilinguals activate 1st-language translations of words in the 2nd language after they have accessed the meaning of those words. The implications of this pattern for claims about the nature of cross-language activation when bilinguals read in 1 or both languages are discussed.
Article
The effects of knowledge of sign language on co-speech gesture were investigated by comparing the spontaneous gestures of bimodal bilinguals (native users of American Sign Language and English; n = 13) and non-signing native English speakers (n = 12). Each participant viewed and re-told the Canary Row cartoon to a non-signer whom they did not know. Nine of the thirteen bimodal bilinguals produced at least one ASL sign, which we hypothesise resulted from a failure to inhibit ASL. Compared with non-signers, bimodal bilinguals produced more iconic gestures, fewer beat gestures, and more gestures from a character viewpoint. The gestures of bimodal bilinguals also exhibited a greater variety of handshape types and more frequent use of unmarked handshapes. We hypothesise that these semantic and form differences arise from an interaction between the ASL language production system and the co-speech gesture system.
Article
Bilinguals often outperform monolinguals on nonverbal tasks that require resolving conflict from competing alternatives. The regular need to select a target language is argued to enhance executive control. We investigated whether this enhancement stems from a general effect of bilingualism (the representation of two languages) or from a modality constraint that forces language selection. Bimodal bilinguals can, but do not always, sign and speak at the same time. Their two languages involve distinct motor and perceptual systems, leading to weaker demands on language control. We compared the performance of 15 monolinguals, 15 bimodal bilinguals, and 15 unimodal bilinguals on a set of flanker tasks. There were no group differences in accuracy, but unimodal bilinguals were faster than the other groups; bimodal bilinguals did not differ from monolinguals. These results trace the bilingual advantage in cognitive control to the unimodal bilingual's experience controlling two languages in the same modality.
Simultaneous production of American Sign Language and English costs the speaker but benefits the perceiver
  • K Emmorey
  • J Petrich
  • T Gollan
Emmorey, K., Petrich, J., & Gollan, T. (2009, July). Simultaneous production of American Sign Language and English costs the speaker but benefits the perceiver. Paper presented at the 7th International Symposium on Bilingualism, Utrecht, The Netherlands.
Lexical borrowing in American Sign Language
  • R Battison
Battison, R. (1978). Lexical borrowing in American Sign Language. Silver Spring, MD: Linstok Press.
Remembering in signs
  • U Bellugi
  • E S Klima
  • P Siple
Bellugi, U., Klima, E. S., & Siple, P. (1975). Remembering in signs. Cognition, 3, 93–125.
American Sign Language-Sentence Reproduction Test: Development and implications
  • P Hauser
  • R Paludneviciene
  • T Supalla
  • D Bavelier
Hauser, P., Paludneviciene, R., Supalla, T., & Bavelier, D. (2008). American Sign Language-Sentence Reproduction Test: Development and implications. In R. M. de Quadros (Ed.), Sign language: Spinning and unraveling the past, present and future (pp. 160-172). Petrópolis, Brazil: Arara Azul.