Bilingualism: Language and Cognition: page 1 of 14 © Cambridge University Press 2015 doi:10.1017/S136672891500067X
The time course of cross-language activation in deaf ASL–English bilinguals
JILL P. MORFORD
Department of Linguistics, University of New Mexico, USA
NSF Science of Learning Center on Visual Language and
Visual Learning (VL2)
CORRINE OCCHINO-KEHOE
Department of Linguistics, University of New Mexico, USA
NSF Science of Learning Center on Visual Language and
Visual Learning (VL2)
PILAR PIÑAR
Department of World Languages and Cultures, Gallaudet
University, USA
NSF Science of Learning Center on Visual Language and
Visual Learning (VL2)
ERIN WILKINSON
Department of Linguistics, University of Manitoba, Canada
NSF Science of Learning Center on Visual Language and
Visual Learning (VL2)
JUDITH F. KROLL
Department of Psychology, Pennsylvania State University, USA
NSF Science of Learning Center on Visual Language and
Visual Learning (VL2)
(Received: April 8, 2015; final revision received: August 28, 2015; accepted: August 31, 2015)
What is the time course of cross-language activation in deaf sign–print bilinguals? Prior studies demonstrating
cross-language activation in deaf bilinguals used paradigms that would allow strategic or conscious translation. This study
investigates whether cross-language activation can be eliminated by reducing the time available for lexical processing. Deaf
ASL–English bilinguals and hearing English monolinguals viewed pairs of English words and judged their semantic
similarity. Half of the stimuli had phonologically related translations in ASL, but participants saw only English words. We
replicated prior findings of cross-language activation despite the introduction of a much faster rate of presentation. Further,
the deaf bilinguals were as fast or faster than hearing monolinguals despite the fact that the task was in their second
language. The results allow us to rule out the possibility that deaf ASL–English bilinguals only activate ASL phonological
forms when given ample time for strategic or conscious translation across their two languages.
Keywords: bilingualism, word recognition, sign language, deaf
We would like to thank the participants of our research, as well as Selina Agyen, Benjamin Anible, Richard Bailey, Brian Burns, Yunjae Hwang, Teri Jaquez, Carla Ring, and Paul Twitchell for help in programming, data collection, coding and analysis. Portions of this study were presented at the 11th Theoretical Issues in Sign Language Research Conference in London, England. This research was supported by the National Science Foundation Science of Learning Center Program, under cooperative agreement numbers SBE-0541953 and SBE-1041725. The writing of this article was also supported in part by NIH Grant HD053146 and NSF Grants BCS-0955090 and OISE-0968369 to Judith F. Kroll. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the National Institutes of Health or the National Science Foundation.

Address for correspondence:
Jill P. Morford, Department of Linguistics, MSC03 2130, University of New Mexico, Albuquerque, NM 87131-0001, USA
morford@unm.edu

One of our teenagers recently walked in from a hard workout and remarked off-hand, “Hey, Mom, when you say you have sore muscles in German, you’re really saying your muscles have a hangover.” Muskelkater in German
is indeed a compound word, consisting of the base forms
for muscle and hangover. This reflection by a German–
English bilingual belies the complex relationship between
the two languages of a bilingual. Generally, bilinguals
are not conscious of words from their two languages
competing for recognition in a monolingual context,
but converging evidence indicates that bilingual lexical
processing is typically language non-selective (Dijkstra
& Van Heuven, 2002; Jared & Kroll, 2001; Marian &
Spivey, 2003; Thierry & Wu, 2007; Van Wijnendaele
& Brysbaert, 2002). In other words, whether speaking
or listening, reading or writing, bilinguals activate word
forms in both languages. However, the degree of activation
of words across the two languages depends on both lexical
form and meaning similarities. Cognates, or words that
have similar form and meaning, as in Muskel and muscle,
show more robust patterns of cross-language activation
than translation equivalents that do not share a similar form, such as Kater and hangover (De Groot & Nas,
1991; Lemhöfer, Dijkstra, Schriefers, Baayen, Grainger
& Zwitserlood, 2008).
Evidence for non-selective activation comes from a
variety of sources. For the purposes of this article, we
focus on studies that investigate second language (L2)
visual word recognition in adult bilinguals. In one of
the earliest studies to investigate non-selective access,
Nas (1983) asked Dutch–English bilinguals to perform
a lexical decision task in their second language, English.
Participants were slower and made more errors rejecting
non-words if they were pseudohomophones of Dutch
words. In this particular instance, the non-target language
was activated due to shared orthographic and phonological
lexical forms across the two languages. In other studies,
translation equivalents sharing little form similarity, such
as Dutch paard and English horse, can also be activated
cross-linguistically (De Groot & Nas, 1991), but these
effects disappear in a masked priming protocol. The most
robust effects of cross-language activation are found when
both form and meaning align in the two languages, as in
the case of cognates. Lemhöfer and colleagues (2008)
compared within-language and cross-language effects on
English word recognition in a progressive demasking
paradigm with native speakers of French, German and
Dutch. Of the cross-language factors, they found that
cognate status but not L1 orthographic neighborhood
(number of high- and low-frequency neighbors, total
number and summed frequency of neighbors) accounted
for significant amounts of variation in response time.
Several theoretical models have been proposed to
represent the structure of the bilingual lexicon and to make
predictions about how bilinguals process and produce
language. These models are, for the most part, based on
bilinguals who use two spoken languages and built on the
assumption that there is form overlap between the two
languages of a bilingual. A more stringent test of whether
cross-language activation is typical of bilingual language
processing can be made by investigating bilinguals whose
languages have little or no form similarity because they
are produced in two different modalities, as is the case
for spoken and signed languages. While signed languages
have phonological forms¹, they do not share sensory or
motor expressions with spoken language phonological
forms. Writing systems for signed languages are not
widespread, so orthographic similarities between signed
and spoken languages are also lacking. Several recent
studies have found that cross-language activation occurs
even in the absence of phonological or orthographic
overlap in the signed and spoken languages used by
hearing and deaf signing bilinguals (Emmorey, Borinstein, Thompson & Gollan, 2008; Kubuş, Villwock, Morford & Rathmann, 2015; Morford, Kroll, Piñar & Wilkinson, 2014; Morford, Wilkinson, Villwock, Piñar & Kroll,
2011; Ormel, Hermans, Knoors & Verhoeven, 2012;
Shook & Marian, 2012). In light of this new evidence,
current bilingual models need to be re-evaluated. In this
article, we address how orthographic word forms activate
phonological word forms cross-lingually in bilinguals
whose languages have no phonological or orthographic
overlap because one of their languages is a written
language and the other one a signed language. Specifically,
we investigate how the time course of cross-language
activation in these bilinguals can inform current bilingual
lexical processing models.

¹The sublexical structure of signs is traditionally described with four formational parameters: handshape, location, movement and orientation (Battison, 1978; Stokoe, Croneberg, & Casterline, 1965).
We have coined the term SIGN–PRINT BILINGUALS to refer to deaf bilinguals who use a signed language as the primary language for face-to-face communication, and the written form of a spoken language for reading and writing (Morford et al., 2011, 2014; Piñar, Dussias & Morford,
2011). This term highlights several unique characteristics
about this population. First, it distinguishes deaf signing
bilinguals from hearing signing bilinguals, referred to as
BIMODAL BILINGUALS (Bishop & Hicks, 2005; Johnson,
Watkins & Rice, 1992). The term BIMODAL emphasizes
the relationship of auditory-vocal and visual-manual
representations, which is critical to understanding how
hearing bilinguals relate their knowledge of speech and
signs, but is less relevant for deaf bilinguals. Second,
the term SIGN–PRINT BILINGUALS emphasizes language-
dominance in the signing modality, by including the
reference to sign in the first position. The use of
the term PRINT as opposed to SPOKEN LANGUAGE is
selected to highlight the fact that the visual form of
the spoken language is often the primary access to
the second language (Hoffmeister & Caldwell-Harris,
2014; Kuntze, 2004; Supalla, Wix & McKee, 2001).
As with any bilingual population, sign–print bilinguals
are heterogeneous with respect to the specific age
of first exposure to and fluency attained in both the
dominant and non-dominant languages, and language
dominance is likely to fluctuate across the lifespan.
The term SIGN–PRINT BILINGUALS is not intended to
rule out the possibility of effects of residual auditory
experience; further, sign–print bilinguals may rely on
knowledge of articulatory and even acoustic patterns
in the spoken language. In sum, this term is selected
to highlight important characteristics of the population
under study, but not to make theoretical claims that these
characteristics are the only factors that influence language
representation and use for deaf bilingual signers who have
knowledge of a spoken language.
For sign–print bilinguals, evidence of non-selective
access comes from tasks demonstrating activation
of phonological forms in a signed language while
participants are processing orthographic word forms
from a spoken language. Even in a monolingual
written task that can be completed without reference to
the participant’s signed language, bilingual signers are
influenced by the phonological form of the sign language
translation equivalents of the written word stimuli. The
relationship between print and sign cannot be explained
by mappings between shared or overlapping phonological
or orthographic systems, as is the case for hearing
bilinguals who know two spoken languages. Thus, this
result raises important questions about what precisely
is activated when deaf bilinguals read written words.
More specifically, how are written forms from a spoken
language related to sign language knowledge?
If we take hearing bilinguals as a starting point
for predicting the relationship between print and signs,
it would be logical to expect orthographic word
forms to activate the associated sub-lexical and lexical
phonological forms of the spoken language, since the
orthography is designed to capture these phonological
regularities. Subsequently, those phonological forms,
particularly the lexical phonological forms, would activate
lexical phonological forms in the signed language through
lateral connections, as well as through top-down activation
from shared semantics. This path of activation would
be consistent with the BIA+ model of bilingual lexical
access (Dijkstra & Van Heuven, 2002; see Figure 1). If
this model is correct, then deaf bilinguals, like hearing
bilinguals, should be somewhat slower than monolinguals
during lexical processing tasks (Bijeljac-Babic, Biardeau
& Grainger, 1997), in part due to the additional processing
costs incurred by activation of the non-target language.
Further, Dijkstra & Van Heuven (2002: 183–4) predict
that “an absence of cross-linguistic phonological and
semantic effects for different words could occur if task
demands allow responding to faster codes (for instance,
orthographic L1 codes), giving slower codes no chance to
affect the response times.” In other words, tasks allowing
faster processing of the target stimulus should be less
likely to show an influence of cross-language activation
than tasks requiring more protracted processing.

Figure 1. Three alternative models of bilingual word recognition in deaf sign–print bilinguals based on the BIA+ model (Dijkstra & Van Heuven, 2002)
One of the unique characteristics of sign–print
bilinguals concerns the relationship between orthographic
and phonological representations. Orthographic represen-
tations are typically restricted to the spoken language
while phonological representations are much richer for
the signed language. Further, the L2 orthography will
not have a regular and predictable mapping to the L1
signed language phonology, particularly at a sublexical
level. Traditional studies of reading in the deaf population
have assumed that orthographic forms activate spoken
phonological forms just as in hearing readers (e.g.,
Colin, Magnan, Ecalle & Leybaert, 2007; Leybaert, 1993;
Perfetti & Sandak, 2000). Following this assumption, we
might modify the BIA+ model to specify that signed
phonological forms are activated only subsequently to
spoken phonological forms (see Figure 1: Alternative
1). An even more extreme interpretation of this view
would hold that signed languages are so different from
spoken languages in their phonological representations
that only conscious translation of English orthographic
forms into ASL could explain the activation of ASL
phonological forms with no lateral connections between
the phonological lexical forms of the two languages.
Alternative 1 could be criticized on the basis of the
fact that deaf bilinguals are unlikely to have rich spoken
language phonological representations due to their limited
auditory experience. If spoken language phonological
forms are eliminated from this model, then the possibility
arises that deaf bilinguals could map orthographic
lexical forms directly to semantics without activating
phonological word forms at all (see Figure 1: Alternative
2). This alternative model predicts that deaf bilinguals
might be significantly faster to process written words than
hearing monolinguals or bilinguals due to less diffusion
of activation through the lexicon, and less competition of
lexical alternatives during the course of word recognition.
However, if direct mappings between orthographic word
forms and semantics are the ONLY associations impacting
lexical access, then any facilitation or inhibition due to
activation of phonological forms from the signed language
would be post-lexical in nature. For instance, Morford and
colleagues (2011,2014) have shown that phonological
similarity between the sign translations of word pairs
presented in print affects the reaction times of sign–
print bilinguals. Specifically these bilinguals are slower
at making semantic similarity judgments of English
word pairs that are semantically unrelated but whose
sign translations are phonologically related than when
the sign translations are also phonologically unrelated.
Correspondingly, reaction times to semantically related
pairs are faster when the sign translations are also
phonologically related. A model that predicts direct
mappings between orthographic and semantic forms
bypassing phonology completely would only predict this
type of evidence if the participants were able to activate
the ASL lexical forms subsequent to accessing shared
semantic representations.
An alternative to either of these views has been
put forward by Ormel (2008; cf. Ormel et al., 2012),
who proposes vertical connections between lexical
orthographic form representations and semantics, and
lateral connections between lexical orthographic forms
and lexical phonological forms in the signed language
(see Figure 1: Alternative 3). In a separate study, Hermans, Knoors, Ormel & Verhoeven (2008) propose
a model of vocabulary development in deaf bilingual
signers in which L2 orthographic lexical forms are initially
mapped directly to L1 phonological lexical forms (signed
phonological forms). In order to comprehend written word
forms, signers in early stages of development mediate
their comprehension of written words by activating signed
phonological forms. Subsequently, deaf bilingual signers
acquire L2 spoken phonological forms, and learn to
associate these forms with meaning. They also develop
direct access of meaning from L2 orthographic forms.
Not until a third and final stage in development do deaf
children create associations between L2 phonological
and L2 orthographic lexical forms. The implication
of this model for adult sign–print bilinguals is that
although L2 orthographic lexical forms will activate
L2 phonological lexical forms, the most entrenched
associations are between L2 orthographic lexical forms
and semantics as well as L1 signed phonological forms
(Kubuş et al., 2015; Morford et al., 2014). If adult sign–
print bilinguals have weaker associations from print to
spoken phonological forms (i.e., L2 phonological forms)
than to signed phonological forms (i.e., L1 phonological
forms) as a result of their acquisition history, this slightly
modified version of Ormel’s model makes yet another
set of predictions for patterns of activation from print
to sign. Deaf bilinguals should be as fast responding
to L2 orthographic forms as hearing monolinguals are
when responding to L1 orthographic forms by virtue
of dampened activation of spoken phonological forms
and reduced competition of lexical alternatives. However,
cross-language phonological priming and inhibition
effects should be quite robust, occurring soon after
uptake of the input language according to this model
since L2 orthographic forms became associated with L1
phonological forms earlier in the development of the
lexicon than with L2 phonological forms.
The current study explores these possible config-
urations of bilingual lexical access for deaf sign–
print bilinguals. While the predictions are illustrated as
variations on the BIA+ model, our claims are not tied
to a PDP architecture per se. Similar alternatives could
be illustrated with a model that uses a connectionist
architecture, such as Shook & Marian’s (2013) Bilingual
Language Interaction Network for Comprehension of
Speech (BLINCS). In BLINCS, the relative strengths
of L2 orthography to L1 phonology vs. L2 phonology
mappings would emerge across training epochs due
to sparse L2 phonological representations during L2
orthographic exposure.
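To make the contrast among these alternatives concrete, the sketch below implements a toy spreading-activation network in Python. It is purely illustrative and is not an implementation of BIA+ or BLINCS: the node names, connection weights, and activation threshold are arbitrary assumptions. Its only purpose is to show that a direct lateral connection from L2 orthographic forms to L1 phonological forms (Alternative 3) yields earlier co-activation of ASL phonology than architectures that route activation through English phonology (Alternative 1) or through semantics alone (Alternative 2).

```python
# Illustrative sketch only: a toy discrete-time spreading-activation network
# contrasting the three alternative architectures. Node names, the 0.6 weight,
# and the 0.5 threshold are arbitrary assumptions, not parameters of any model.

NETWORKS = {
    # Alternative 1: print -> English phonology -> (semantics, ASL phonology)
    "alt1_serial": {
        "eng_orth": ["eng_phon"],
        "eng_phon": ["semantics", "asl_phon"],
        "semantics": ["asl_phon"],
        "asl_phon": [],
    },
    # Alternative 2: print -> semantics only; ASL forms reached post-lexically
    "alt2_direct_semantics": {
        "eng_orth": ["semantics"],
        "semantics": ["asl_phon"],
        "asl_phon": [],
        "eng_phon": [],
    },
    # Alternative 3: print -> semantics AND laterally -> ASL phonology
    "alt3_lateral": {
        "eng_orth": ["semantics", "asl_phon"],
        "semantics": ["asl_phon"],
        "asl_phon": [],
        "eng_phon": [],
    },
}

def steps_to_activate(network, source="eng_orth", target="asl_phon",
                      weight=0.6, threshold=0.5, max_steps=10):
    """Propagate activation one edge per time step; return the first step
    at which the target node crosses threshold (None if it never does)."""
    activation = {node: 0.0 for node in network}
    activation[source] = 1.0
    for step in range(1, max_steps + 1):
        incoming = {node: 0.0 for node in network}
        for node, neighbours in network.items():
            for nb in neighbours:
                incoming[nb] += weight * activation[node]
        for node in network:
            activation[node] = min(1.0, activation[node] + incoming[node])
        if activation[target] >= threshold:
            return step
    return None

for name, net in NETWORKS.items():
    print(name, "->", steps_to_activate(net), "step(s) to ASL phonology")
```

Running the sketch, Alternative 3 reaches the ASL phonological node in a single step, while Alternatives 1 and 2 require additional cycles, mirroring the contrast in predicted timing that motivates the SOA manipulation below.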
Our study compares the overall speed and accuracy of
decisions to English written word forms between hearing
English monolinguals and deaf ASL–English bilinguals,
as a diagnostic for the activation of phonological forms.
Additionally, by manipulating the time course of a
semantic similarity judgment task in which participants
see pairs of English words, we explore whether deaf
bilinguals exhibit cross-language phonological priming
effects at both shorter and longer time courses. Guo,
Misra, Tam & Kroll (2012) have pointed out that in a
variety of bilingual lexical processing tasks, evidence
of activation of the L1 phonological form is associated
with a long SOA, while studies using a shorter SOA have
not found evidence of L1 phonological form activation,
particularly in highly skilled bilinguals (Sunderman &
Kroll, 2006; Talamas, Kroll & Dufour, 1999), suggesting
that, at least at higher levels of L2 proficiency, word
recognition in the L2 does not need to be mediated by
L1 lexical form activation.
Most studies documenting cross-language activation,
including Guo et al. (2012), present word forms from
both languages (De Groot & Nas, 1991; Nas, 1983).
With explicit activation of both languages, even masked
priming has been shown to elicit cross-language activation
(Bijeljac-Babic et al., 1997; Brysbaert, Van Dyck &
Van de Poel, 1999). But studies that have not presented
any stimuli in the non-target language, as in our study,
have always used long SOAs. Wu & Thierry (2010)
used SOAs of 1000 to 1200 ms with a monolingual
semantic similarity judgment task (cf. Thierry & Wu,
2004, 2007). Similarly long SOAs were used in studies
requiring bilinguals to complete a monolingual letter-
counting task (Martin, Costa, Dering, Hoshino, Wu &
Thierry, 2012) and to decide whether a target is a
geometric figure or a word (Wu, Cristino, Leek & Thierry,
2013). Prior studies specifically looking at cross-language
activation in deaf sign–print bilinguals have also used a
SOA of 1000 ms (Kubuş et al., 2015; Morford et al., 2011, 2014). In the current study, we shortened this SOA
to 750 ms in the long SOA condition, and following
Guo et al. (2012), we selected 300 ms for the short
SOA condition. Our prediction is that at the long SOA,
we will replicate previous lexical co-activation effects
for sign–print bilinguals, since there would be sufficient
time for the ASL L1 translations to become activated
regardless of the architecture of the bilingual lexicon. In
the short SOA condition, if English orthographic forms
are not directly associated with ASL phonological forms
(Alternatives 1 & 2), we predict no L1 phonology co-
activation effects. By contrast, a direct mapping of L2
orthographic forms to L1 phonological forms (Alternative
3) predicts that effects of cross-language activation would
be detected even with a much shorter SOA. No prior
studies have attempted to uncover effects of cross-
language activation using a monolingual task with such a
short SOA. If deaf sign–print bilinguals complete the task
as quickly as monolinguals and nevertheless show the
cross-language activation effects, this will provide initial
evidence for direct mappings between L2 orthography and
L1 phonology in sign–print bilinguals.
Method
Participants
There were three groups of participants. The first group
consisted of 29 DEAF BALANCED BILINGUALS, a group
of deaf signers who were highly proficient in ASL and
English. The second group consisted of 24 DEAF ASL-
DOMINANT BILINGUALS, a group of deaf signers who
were more proficient in ASL than in English. The third
group consisted of 43 HEARING MONOLINGUALS,who
had acquired English from birth, and had no knowledge
of ASL.
The deaf bilingual participants were recruited from
Gallaudet University, and were paid $20/hour for their
participation in the experiment. Ninety participants were
recruited through fliers targeting students who consider
themselves bilingual in ASL and English. Data from 37
participants were eliminated: 5 due to equipment failure
or experimenter error, 9 due to failure to complete the
protocol or follow directions, 4 due to onset of deafness
after age 2, 9 due to low accuracy (less than 80%
correct) on the experimental task, and 10 due to low
accuracy (less than 45% correct)² on the ASL assessment task. ASL proficiency was assessed with the American Sign Language-Sentence Repetition Task (ASL-SRT; Hauser, Paludnevičienė, Supalla & Bavelier, 2008). The
remaining 53 participants were grouped into BALANCED
and ASL-DOMINANT bilingual groups based on their
performance on the Passage Comprehension subtest of
the Woodcock–Johnson III Tests of Achievement (WJ)
which was used to assess English proficiency. Twenty-nine
participants scored 35 (Grade equivalent 8.9) or above on
the WJ and were assigned to the BALANCED BILINGUAL
group. Twenty-four participants scored 34 or below and
were assigned to the ASL-DOMINANT BILINGUAL group.
Table 1 lists the number of females, the mean age, mean
ASL-SRT and mean WJ score for each group.
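As an illustration, the screening and grouping criteria just described can be expressed in a few lines of Python. This is a hypothetical sketch of the logic, not the authors' analysis code; the record layout and field names are invented for the example.

```python
# Minimal sketch of the participant screening and grouping criteria described
# above. The dictionary layout and field names are hypothetical illustrations.

NATIVE_MEAN, NATIVE_SD = 0.66, 0.102       # ASL-SRT native-signer norms (footnote 2)
ASL_CUTOFF = NATIVE_MEAN - 2 * NATIVE_SD   # = 0.456, applied as 45%
WJ_BALANCED_CUTOFF = 35                    # WJ Passage Comprehension, Grade equiv. 8.9

def classify(participant):
    """Return the group label for one participant record, or None if excluded."""
    if participant["asl_srt"] < 0.45 or participant["task_accuracy"] < 0.80:
        return None                        # excluded for low ASL or task accuracy
    if participant["wj_score"] >= WJ_BALANCED_CUTOFF:
        return "balanced_bilingual"
    return "asl_dominant_bilingual"

print(f"ASL-SRT cutoff: {ASL_CUTOFF:.1%}")  # 45.6%, applied as 45%
print(classify({"asl_srt": 0.76, "task_accuracy": 0.92, "wj_score": 38}))
```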
Table 1. Mean (sd) and Range of Background Characteristics of Deaf Bilinguals

Participant Group               n (# female)    Age [range]   ASL Proficiency (ASL-SRT)   English Proficiency (WJ Subtest 9)
Deaf Balanced Bilinguals        29 (24 female)  22 [19, 51]   76% (12%) [45%, 95%]        38 (2.4) [35, 43]
Deaf ASL-dominant Bilinguals    24 (12 female)  24 [18, 46]   66% (15%) [45%, 90%]        31 (2.7) [23, 34]

²The ASL-SRT is undergoing standardization and item analysis. A sample (n = 23) of native signers scored 66% correct on the initial version of the test, with a standard deviation of 10.2%. The minimum accuracy score on the ASL-SRT for the current study was selected by calculating two s.d. below the mean for native signers.

The HEARING MONOLINGUAL group was recruited from the undergraduate psychology pool at Penn State University, and completed the experiment for course credit. The average age of the 43 participants (37 female)
was 19 (range [18, 21]). Criteria for inclusion in the study
included being a native speaker of English, having no
prior knowledge of ASL nor any degree of proficiency
in another signed language, and no history of hearing
or speech disorders. All hearing participants also reported
being born in the US, and speaking English as the primary
language in the home. They rated their reading ability in
English on a scale of 1 to 10. Average self-rating for
reading ability in English was near ceiling at 9.79 (sd = .51).
Materials
The materials included 440 English word pairs with
phonologically related (n = 220) and phonologically unrelated (n = 220) translation equivalents in ASL. Hearing native speakers of English rated word pairs on a scale from 1 (semantically unrelated) to 7 (semantically related). Word pairs with mean ratings below 2.8 were classified as semantically unrelated (n = 170), and pairs with mean ratings above 4.2 were classified as semantically related (n = 196). Phonological similarity
was defined as sharing a minimum of two formational
parameters, either handshape and location, handshape
and movement, or location and movement. English word
pairs with phonologically-related and unrelated ASL
translations did not differ in frequency or length (see Table 2).

Table 2. Lexical characteristics of the English stimuli by condition

                                      Semantically Unrelated                       Semantically Related
                                      Phon. Unrelated  Phon. Related  t-test      Phon. Unrelated  Phon. Related  t-test
Semantic Similarity Rating (1–7)      1.25 (.66)       1.36 (.71)     n.s.        5.40 (.60)       5.39 (.61)     n.s.
Word length (# letters)               5.7 (1.8)        5.9 (2.2)      n.s.        5.8 (2.2)        6.1 (2.3)      n.s.
HAL Log Frequency                     9.8 (1.7)        9.9 (1.6)      n.s.        10.1 (1.6)       9.9 (1.7)      n.s.
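For concreteness, the stimulus classification criteria can be sketched as follows. This is an illustrative Python rendering of the rules stated above, not the materials-preparation code; the representation of signs as parameter dictionaries, and the specific parameter values for the candy and bored examples discussed under Procedure, are simplifying assumptions.

```python
# Minimal sketch of the stimulus classification criteria described above.
# The data representation (parameter dictionaries, ratings) is hypothetical.

PARAMETERS = {"handshape", "location", "movement"}

def semantic_condition(mean_rating):
    """Classify a word pair by its mean semantic similarity rating (1-7)."""
    if mean_rating < 2.8:
        return "semantically_unrelated"
    if mean_rating > 4.2:
        return "semantically_related"
    return None                         # intermediate ratings: not used

def phonologically_related(sign1, sign2):
    """ASL translations count as phonologically related if they share at
    least two of the parameters handshape, location, and movement."""
    shared = sum(1 for p in PARAMETERS if sign1[p] == sign2[p])
    return shared >= 2

candy = {"handshape": "1", "location": "cheek", "movement": "twist"}
bored = {"handshape": "1", "location": "nose", "movement": "twist"}
print(phonologically_related(candy, bored))   # True: shared handshape + movement
```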
Procedure
Participants first completed a background questionnaire.
Deaf participants were then given the language assessment
tasks. The experimental task was programmed in E-
Prime Professional 2.0 (Psychology Software Tools,
Inc., Sharpsburg, PA). Ten practice trials with feedback
on accuracy preceded the experiment. Participants had
to obtain 80% accuracy before proceeding to the
experimental trials. Two blocks of experimental trials
were presented. One block used a short stimulus onset
asynchrony (SOA, 300 ms) and one block used a long
SOA (750 ms). The order of blocks and the assignment
of stimulus lists to blocks was counterbalanced across
participants such that half of the participants completed
the short SOA block prior to the long SOA block, and
vice versa. The practice trials were programmed to match
the SOA of the first block that participants completed.
Each participant responded only once to a target word
pair, either in the short or the long SOA condition.
Experimental trials began with a 500 ms fixation cross.
In the short SOA condition, the first stimulus word was
presented for 250 ms followed by a 50 ms interstimulus
interval (ISI). The second stimulus word was presented
and remained on the screen until participants responded.
In the long SOA condition, the first stimulus word was
presented for 250 ms followed by a 500 ms ISI. The
second word remained on the screen until participants
responded. Participants were asked to determine whether
the English word pairs were semantically related or
unrelated. Participants responded by selecting a keyboard
button labeled ‘yes’ with their dominant hand, or ‘no’ with
their non-dominant hand. Because semantically related
and unrelated trials were analyzed separately, we did not
counterbalance responses across hands.
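The trial structure can be summarized in code. The experiment itself was programmed in E-Prime; the following is a minimal PsychoPy-style sketch of the same timing for readers who wish to reconstruct it, with the 'y'/'n' keys standing in for the labeled yes/no buttons described above.

```python
# Sketch of one experimental trial's timing, assuming a PsychoPy-style
# framework rather than the authors' actual E-Prime 2.0 script.
from psychopy import visual, core, event

win = visual.Window(fullscr=False, color="white")
clock = core.Clock()

def run_trial(word1, word2, soa_ms):
    isi_ms = soa_ms - 250                                  # 50 ms (short) or 500 ms (long)
    fixation = visual.TextStim(win, text="+", color="black")
    fixation.draw(); win.flip(); core.wait(0.500)          # 500 ms fixation cross
    prime = visual.TextStim(win, text=word1, color="black")
    prime.draw(); win.flip(); core.wait(0.250)             # first word for 250 ms
    win.flip(); core.wait(isi_ms / 1000.0)                 # blank interstimulus interval
    target = visual.TextStim(win, text=word2, color="black")
    target.draw(); win.flip(); clock.reset()               # second word until response
    keys = event.waitKeys(keyList=["y", "n"], timeStamped=clock)
    return keys[0]                                         # (key, RT in seconds)

print(run_trial("candy", "bored", soa_ms=300))
```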
To minimize the influence of lexical variation in
ASL on the experimental results, deaf participants
translated English targets into ASL after completing
the experimental task. ASL translations were used
to eliminate trials on which participants’ signs did
not conform to the condition criteria of phonological
relatedness. For example, for the English target candy,
most signers produced a sign in which the index finger
contacts the cheek and then is rotated. This sign is closely
phonologically related to the ASL translation of the
English word bored, produced with the same handshape
and movement but located at the side of the nose. This pair
of targets was included in the semantically unrelated but
phonologically related condition (candy-bored). However,
several signers produced the polysemous ASL sign
commonly glossed sugar in response to the English
target candy. This sign is produced at the chin with all
fingers extended instead of just the index, and thus is
not phonologically related to the ASL translation of the
English word bored. For the specific participants who
completed the translation task using the sugar variant in
response to candy, we eliminated the candy-bored trial.
Response times two standard deviations above and below
the mean for each participant were also excluded. This
resulted in the exclusion of 4.9% of the responses of the
BALANCED BILINGUAL group, 8.7% of the responses of
the ASL-DOMINANT BILINGUAL group, and 4.6% of the
HEARING MONOLINGUAL group.
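The two data-cleaning steps described in this section, dropping trials whose participant-specific translations violated the condition criteria and trimming response times beyond two standard deviations of each participant's mean, can be sketched as follows, assuming a long-format pandas DataFrame with hypothetical column names.

```python
# Sketch of the trial-exclusion and trimming steps; column names are hypothetical.
import pandas as pd

def clean_responses(df: pd.DataFrame) -> pd.DataFrame:
    # Drop trials where the participant's own translation did not meet the
    # phonological-relatedness criterion for that item (e.g., the SUGAR
    # variant produced for 'candy' invalidates that participant's
    # candy-bored trial).
    df = df[df["translation_matches_condition"]]

    # Exclude RTs more than 2 sd from each participant's own mean.
    def trim(group):
        m, s = group["rt"].mean(), group["rt"].std()
        return group[(group["rt"] - m).abs() <= 2 * s]

    return df.groupby("participant", group_keys=False).apply(trim)
```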
Results
Response time and accuracy data were analyzed using
a 3-way mixed ANOVA across participants (F1) and
items (F2), with repeated measures on the within-
subjects variables, PHONOLOGY (related vs. unrelated
translation equivalents in ASL) and STIMULUS ONSET
ASYNCHRONY -SOA(300 ms, 750 ms). The between-
subjects variable was GROUP (Balanced bilinguals, ASL-
dominant bilinguals, Hearing monolinguals). An analysis
of the effect of the TYPE of phonological relationship
on semantic similarity judgments, i.e., whether the
translation equivalents overlapped in handshape, location
or movement, is reported in Occhino-Kehoe et al. (in
preparation). We analyzed the semantically unrelated and
the semantically related conditions separately since the
former condition required a no-response and the latter
condition required a yes-response.
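As an illustration of the analysis structure, the by-participants (F1) analysis can be approximated in Python with pingouin. This is a simplified sketch, not the authors' analysis: pingouin's mixed_anova handles a single within-subjects factor, so the snippet shows the Phonology × Group analysis at one SOA; the full Phonology × SOA × Group design requires a more general mixed-model routine, and the F2 analysis would aggregate over items rather than participants.

```python
# Simplified sketch of the F1 (by-participants) analysis, assuming a
# long-format DataFrame with hypothetical column names.
import pingouin as pg

def f1_anova(df, soa_ms):
    cells = (df[df["soa"] == soa_ms]
             .groupby(["participant", "group", "phonology"], as_index=False)
             ["rt"].mean())                     # one mean RT per design cell
    return pg.mixed_anova(data=cells, dv="rt", within="phonology",
                          subject="participant", between="group")
```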
Semantically unrelated condition
In the semantically unrelated condition, there was a significant main effect of Group on response time, F1 (2, 93) = 4.04, p < .05, ηp² = .08; F2 (2, 336) = 237.99, p < .001, ηp² = .586. Deaf balanced bilinguals responded significantly faster (779 ms) than the hearing monolinguals (892 ms, p < .01). Deaf ASL-dominant bilinguals (848 ms) were slower than the balanced bilinguals but faster than the monolinguals; however, they did not differ significantly from either of the other groups. The effect of Group was modified by an interaction of Group and Phonology, F1 (2, 93) = 6.07, p < .01, ηp² = .12; F2 (2, 336) = 3.22, p < .05, ηp² = .019.
Replicating the effects of Morford et al. (2011, 2014),
balanced and ASL-dominant bilinguals were significantly
slower when responding to semantically unrelated English
word pairs with phonologically related (793 ms, 863
ms respectively) than with unrelated (764 ms, 833 ms
respectively) ASL translations. The monolingual group
showed no effect of Phonology (892 ms vs. 891 ms in the
related and unrelated conditions respectively). Note that
the two bilingual groups were faster in both phonology
conditions than the monolinguals suggesting that they are
faster even when a conflict in semantic and phonological
relatedness is slowing their performance.
There was no significant main effect of SOA. To
test the hypothesis that English print initially activates
only English phonological forms, and subsequently
activates ASL phonological forms, we completed paired
comparisons to determine whether the effect of Phonology
on the deaf bilinguals was present at the long but not the
short SOA. In the subject analysis, the effect of Phonology
was significant at both SOAs, but in the item analysis, the
effect of Phonology was significant only for the long SOA
(see Tables 3a and 3b).
For the semantically unrelated items, there was not a
significant effect of Group on accuracy. However, there
was a significant main effect of Phonology on accuracy,
F1 (1, 93) = 37.25, p < .001, ηp² = .29; F2 (1, 168) = 4.67, p < .05, ηp² = .03. Performance on the English word
pairs with phonologically unrelated ASL translations
was more accurate than on the English word pairs with
phonologically related ASL translations. This effect was
modulated by an interaction of Phonology and Group, F1 (2, 93) = 5.38, p < .01, ηp² = .10; F2 (2, 336) = 4.54, p < .02, ηp² = .03. As with response time, only the deaf
participants were affected by the Phonology manipulation.
Deaf participants made more errors when the English
word pairs had phonologically related ASL translations
(Balanced 12.2%, ASL-dominant 14.9%) than unrelated
translations (Balanced 7.5%, ASL-dominant 10.7%).
The monolinguals had similar error rates in the two conditions (11.7% phonologically related in ASL, 10.7% phonologically unrelated in ASL). For the accuracy results, the groups had comparable accuracy rates in the phonologically unrelated condition, but the bilingual participants made more errors than the monolingual group in the phonologically related condition. No other effects or interactions reached significance (see Table 3c).

Table 3a. Mean RT (sd) in ms by Group at Short and Long SOAs in the Semantically Unrelated Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Phon. Related    Phon. Unrelated         Phon. Related    Phon. Unrelated
Deaf Balanced Bilinguals       793 (192)        764 (190)               794 (186)        765 (178)
Deaf ASL-dominant Bilinguals   866 (162)        843 (161)               859 (180)        824 (170)
Hearing Monolinguals           879 (176)        876 (162)               906 (162)        907 (170)

Table 3b. Effect of Phonology at Short and Long SOAs in the Semantically Unrelated Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Subject Analysis  Item Analysis          Subject Analysis  Item Analysis
Deaf Balanced Bilinguals       29 ms, p < .01    22 ms, n.s.            29 ms, p < .02    41 ms, p < .01
Deaf ASL-dominant Bilinguals   23 ms, p < .05    13 ms, n.s.            35 ms, p < .01    40 ms, p < .02
Hearing Monolinguals           3 ms, n.s.        9 ms, n.s.             2 ms, n.s.        4 ms, n.s.

Table 3c. Mean Error Rate (sd) by Group at Short and Long SOAs in the Semantically Unrelated Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Phon. Related    Phon. Unrelated         Phon. Related    Phon. Unrelated
Deaf Balanced Bilinguals       10.8% (5.0)      7.8% (4.7)              13.5% (7.1)      7.1% (5.8)
Deaf ASL-dominant Bilinguals   15.3% (9.3)      11.9% (8.7)             14.5% (8.3)      9.4% (8.0)
Hearing Monolinguals           11.5% (6.3)      9.7% (6.4)              11.9% (6.3)      11.7% (6.5)
Semantically related condition
Turning to the semantically related condition, the main effect of Group was significant by items, F2 (2, 388) = 65.00, p < .001, ηp² = .251, but only approached significance in the analysis by participants, F1 (2, 93) = 2.52, p = .09, ηp² = .051. Replicating prior studies that have found that cross-language phonological similarity can enhance performance in sign–print bilinguals (Morford et al., 2011, 2014), there was a significant interaction of Phonology and Group on reaction time, F1 (2, 93) = 14.14, p < .001, ηp² = .23; F2 (2, 388) = 9.65, p < .001, ηp² = .05. Balanced bilinguals and ASL-dominant bilinguals were faster when responding to semantically related English word pairs with phonologically related ASL translations than to English words with unrelated ASL translations (708 vs. 758 ms for Balanced Bilinguals, p < .05; 746 vs. 805 ms for ASL-dominant bilinguals, p < .01). Monolinguals, by contrast, did not differ in the two conditions (791 ms vs. 801 ms). There was also a main effect of SOA on reaction time, F1 (2, 93) = 7.23, p < .01, ηp² = .07; F2 (2, 388) = 16.06, p < .001, ηp² = .08. Responses were faster at the shorter SOA (758 ms) compared to the longer SOA (777 ms). SOA did not interact with Group or Phonology. Paired comparisons evaluating the effect of Phonology at the short and long SOAs for the two bilingual groups produced mixed results.
For the balanced bilinguals, the effect of Phonology was
significant at the short SOA only in the subject analysis,
but not in the item analysis. The effect of Phonology at the long SOA was significant in both subject and item analyses. For the ASL-dominant bilinguals, the effect of Phonology was significant in both subject and item analyses at the short SOA, but at the long SOA, the effect of Phonology was significant in the subject analysis, but only approached significance in the item analysis (see Tables 4a and 4b).

Table 4a. Mean RT (sd) in ms by Group at Short and Long SOAs in the Semantically Related Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Phon. Related    Phon. Unrelated         Phon. Related    Phon. Unrelated
Deaf Balanced Bilinguals       700 (121)        741 (124)               715 (109)        775 (136)
Deaf ASL-dominant Bilinguals   742 (125)        809 (122)               749 (141)        800 (148)
Hearing Monolinguals           776 (117)        782 (120)               806 (121)        819 (119)

Table 4b. Effect of Phonology at Short and Long SOAs in the Semantically Related Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Subject Analysis  Item Analysis          Subject Analysis  Item Analysis
Deaf Balanced Bilinguals       41 ms, p < .001   27 ms, n.s.            60 ms, p < .001   39 ms, p < .05
Deaf ASL-dominant Bilinguals   67 ms, p < .001   54 ms, p < .01         51 ms, p < .001   36 ms, p = .065
Hearing Monolinguals           6 ms, n.s.        4 ms, n.s.             13 ms, n.s.       2 ms, n.s.
The only effect to reach significance in the analysis
of the accuracy data in the semantically related condition
was a main effect of Group, F1 (2, 93) = 5.37, p < .01, ηp² = .10; F2 (2, 388) = 20.44, p < .001, ηp² = .10. The Hearing Monolinguals made fewer errors
(14.5%) than either the Balanced Bilinguals (19.2%) or
the ASL-dominant bilinguals (18.6%). Group differences
in accuracy may reflect the fact that hearing monolinguals
encounter English words in a broader range of contexts
due to their extensive exposure to both spoken and written
English, whereas the deaf bilinguals have more restricted
exposure to English. The two bilingual groups did not
differ in their accuracy (see Table 4c).

Table 4c. Mean Error Rate (sd) by Group at Short and Long SOAs in the Semantically Related Condition

                               Short SOA (300 ms)                        Long SOA (750 ms)
                               Phon. Related    Phon. Unrelated         Phon. Related    Phon. Unrelated
Deaf Balanced Bilinguals       19.5% (6.7)      17.6% (8.9)             20.8% (7.9)      18.8% (9.8)
Deaf ASL-dominant Bilinguals   19.5% (8.5)      17.3% (7.5)             19.0% (9.6)      18.7% (9.8)
Hearing Monolinguals           15.6% (8.6)      12.3% (7.2)             16.5% (7.4)      13.6% (7.8)
Discussion
We investigated how orthographic word forms activate
phonological word forms in deaf sign–print bilinguals
and hearing English monolinguals. Study participants
completed an English semantic similarity judgment
task. Half of the stimuli had phonologically related
translations in ASL, allowing us to evaluate whether ASL
phonological forms were active during the processing
of English orthographic word forms. Critically, we
manipulated the time course of the experiment to
determine whether the activation of ASL phonological
forms by English orthographic forms could be eliminated
if participants did not have sufficient time to engage in
post-lexical activation of ASL phonological forms, or
in conscious translation. We replicated prior findings of
cross-language activation between English print and ASL
signs despite the introduction of a much faster rate of
presentation of the stimuli. The results allow us to rule
out the possibility that deaf ASL–English bilinguals only
activate ASL phonological forms when given ample time
for strategic or conscious translation across their two
languages.
Further, we found that deaf bilinguals who are highly
proficient in both ASL and English performed the
experimental task much faster than hearing monolinguals
with no cost to accuracy. The difference between
these populations was particularly pronounced when
participants were rejecting word pairs that were
semantically unrelated. In this condition, deaf balanced
bilinguals were more than 100 ms faster than hearing
monolinguals on average. This result is particularly
interesting since the phonological manipulation had
no impact on the monolinguals but actually slowed
performance of the deaf bilinguals when the English
word pairs had phonologically similar translations in ASL.
This result should allay any concerns that bilingualism
may have negative consequences for language processing
in deaf individuals. Indeed, the deaf ASL signers
are faster responding to English words than English
monolinguals. Further, the timing differences between
the deaf and hearing participants makes it highly
unlikely that ASL phonological forms are only activated
after English phonological forms and semantics have
both been activated (Figure 1: Alternative 1). The deaf
balanced bilinguals were significantly faster than the
hearing monolinguals, and the deaf ASL-dominant
bilinguals were comparable in reaction time to the
hearing monolinguals. While the faster performance of the
balanced bilinguals could be argued to reflect a selection
effect of including only highly proficient bilinguals
in this group, the fact that the ASL-dominant group
performed as quickly in their non-dominant language as
the monolingual group performed in their dominant (and
only) language makes the relative response times of the
two groups more compelling. If the translation equivalents
were activated post-lexically, then inhibition introduced
through the activation of the translation equivalents
should be apparent in protracted response times of the
experimental groups relative to the monolingual control
group. This was not the case.
Two characteristics of language processing in deaf
bilinguals may be relevant to understanding why deaf
bilinguals are so fast in processing English orthographic
forms. First, despite many claims that deaf readers must
recode orthographic forms to spoken phonological forms
in order to become good readers (Perfetti & Sandak, 2000;
Wang et al., 2008), recent studies indicate that activation
of spoken phonological codes does not distinguish good
and poor deaf readers (Allen, Clark, del Giudice, Koo,
Lieberman, Mayberry & Miller, 2009; Chamberlain &
Mayberry, 2008; Mayberry, del Giudice & Lieberman,
2011). Indeed, Bélanger, Baum & Mayberry (2012) found
that deaf readers relied on orthographic but not spoken
language phonological representations in both pre-lexical
(masked priming) and post-lexical (recall) visual word
processing tasks. Second, a recent ERP study of lexical
access of ASL by deaf signers indicates that semantics
may be accessed earlier in signers than speakers, and
that phonological form processing of signs may be linked
with semantic processing more closely than has been
found for lexical access of spoken words (Gutierrez,
Williams, Grosvald & Corina, 2012). Our finding that
deaf bilinguals are as fast or faster than monolingual
speakers of English to process English print words while
also exhibiting a cross-language activation effect makes
the proposal that deaf bilinguals’ ASL knowledge is
merely an appendage to their representation of English
orthography and phonology untenable. The question is
how signed phonological representations are integrated
with the orthographic and phonological representations of
a spoken language, and whether this integration changes
the time course of processing of orthographic word forms.
We investigated this question by manipulating the time
course of the experiment. Importantly, the study results
replicate prior studies showing that ASL phonological
forms were activated in a monolingual English task
(Morford et al., 2011, 2014). The evidence for non-
selective lexical access in deaf sign–print bilinguals is
quite robust. In past studies using the semantic similarity
judgment paradigm in participants’ L2, the stimulus
onset asynchrony (SOA) has been comparatively long,
about 1000 ms (Thierry & Wu, 2007; Morford et al.,
2011). Guo et al. (2012) investigated whether effects
of L1 form interference in the translation recognition
task can be eliminated by reducing the SOA from
750 ms to 300 ms in an ERP study of Chinese–
English bilinguals. At both SOAs, they found behavioral
evidence of form and meaning interference in translation
recognition performance, but the ERP record differed
across the two conditions. In the short SOA condition,
there was no significant effect of the form distractor on the
P200. Guo et al. interpreted the results as evidence that
bilinguals were not accessing the L1 translations of the
L2 stimuli in order to complete the task. Their behavioral
results do show effects of cross-language activation at the
short SOA, but unlike the current study, their participants
were presented with lexical forms in both languages, a
target word and a translation or distractor from the non-
target language.
We chose to implement a similar manipulation of SOA,
this time in a monolingual task, comparing performance
on English semantic similarity judgments when the
second word was presented 300 ms or 750 ms after the
first. At the long SOA, even a serial model positing an
initial activation of an English phonological form and/or
semantics prior to an ASL phonological form would be
consistent with the co-activation results that we found,
since there would be sufficient time for the ASL translation
of the first stimulus to become activated, as well as all
the phonological neighbors of that sign, which would
presumably include the ASL translation of the second
English stimulus. However, even at the shorter SOA, deaf
bilinguals’ responses were influenced by the activation
of L1 ASL phonological forms. These results are more
consistent with a model of bilingual lexical access in
which English orthographic forms directly activate ASL
phonological forms (Figure 1: Alternative 3) than an
indirect activation of ASL phonological forms subsequent
to the activation of English phonological forms (Figure 1:
Alternatives 1 or 2).
Studies of hearing monolinguals and bilinguals have
generally proposed earlier activation of phonological than
of semantic representations on the basis of orthographic
word forms. Perfetti & Tan (1998), for example, argue that
orthographic-phonological relationships are privileged
over orthographic-semantic relationships due to the fact
that the former are more reliable and the latter are more
context-dependent. In other words, a single orthographic
form is more likely to have multiple meanings than
multiple pronunciations, so readers are more confident
in selecting the appropriate phonological form associated
with the orthographic form than the meaning. For deaf
readers, this assumption should be questioned. If deaf
readers are activating phonological forms from the spoken
language, these forms may be as variable and unreliable
as semantic representations due to restricted auditory
experience, or due to the ontogenetic time course of the
development of these representations (i.e., orthographic
words are acquired prior to spoken phonological words).
Alternatively, if deaf readers are activating phonological
forms from the signed language, it is the relationship
between sign phonology and semantics that is more
predictable than the relationship between a spoken
language orthography and a signed language phonology.
Given the close relationship of form and meaning in
signed languages (Wilcox, 2004), it may be rash to assume
earlier activation of spoken phonological representations
than of semantic representations for deaf bilinguals during
written word processing. One approach that could bring
new insights to this question is to attempt masked priming
(cf. Bélanger et al., 2012), but to select the stimuli so that
the orthographic prime is designed to activate a cross-
language competitor to the target’s translation equivalent.
Studies of hearing bilinguals who acquire spoken
languages with different orthographies, such as Japanese–
English (Hoshino & Kroll, 2008) and Chinese–English
(Wu & Thierry, 2010) bilinguals, have found that overlap
in the written form of words in two spoken languages
is not a requirement for cross-language activation. The
phonology of word forms in two spoken languages
is activated by orthographic forms even when the
orthography is not shared. Our study demonstrates
that overlap in the phonological forms is also not a
prerequisite of cross-language activation. It may be
precisely because the cross-language phonological forms
and/or articulatory routines activated by the orthographic
string do not overlap that cross-language activation is so
robust in sign–print bilinguals. With little need to inhibit
lexical alternatives from multiple languages activated
by print, deaf bilinguals may experience reinforcement
from the simultaneous activation of signed and spoken
phonological forms during print word processing. This
interpretation is consistent with studies demonstrating
faster processing of polysemous words and cognates
relative to control words (Eddington & Tokowicz, 2015;
Lemhöfer et al., 2008) and to studies demonstrating lexical
consolidation across modalities (Bakker, Takashima, van
Hell, Janzen & McQueen, 2014). Shook & Marian
(2012) have proposed a similar explanation for early
parallel activation of signed and spoken phonological
representations in hearing bimodal bilinguals. They used
a visual world paradigm to explore whether hearing
bimodal bilinguals would engage in parallel activation of
ASL while listening to spoken English. They found very
early activation of the ASL labels of the objects pictured
in the display, and draw a parallel between the cross-
modal activation of visual phonological representations in
ASL and spoken phonological representations in English
with the activation of orthographic and phonological
representations in monolinguals. Our results are not cross-
modal – both English orthographic and ASL phonological
forms are visual. Nevertheless, what is shared by all of
these studies is very early and robust parallel activation
when forms are not in competition.
Finally, these results suggest that a model of the
bilingual lexicon will not be able to account for lexical
processing in all bilinguals unless it can accommodate
direct mappings between L2 orthographic word forms
and L1 phonological word forms. Such mappings may
be inconsequential for bilinguals who have access to the
phonological system that an orthography was designed
to capture but, in the absence of those phonological
representations, bilinguals appear to be able to build
robust associations between phonological forms from a
different language to the target orthographic word forms.
In other words, associations between phonological and
orthographic representations need not be merely those
that the orthographic system was designed to capture.
Presumably, this aspect of the configuration of the mental
lexicon could apply to hearing bilinguals as well, but
only in contexts in which there is sufficient exposure to
meaningful orthographic word forms without exposure to
the phonological word forms captured by the orthography.
In sum, the results replicate and extend previous studies
demonstrating cross-language activation in deaf sign–
print bilinguals (Kubuş et al., 2015; Morford et al., 2014;
Morford et al., 2011; Ormel et al., 2012). Deaf sign–
print bilinguals but not hearing English monolinguals
respond faster to semantically similar English word
pairs with phonologically related ASL translations and
slower to semantically unrelated English word pairs
with phonologically related ASL translations than with
phonologically unrelated ASL translations. The current
study replicated this pattern even though the time
course of presentation of the English stimuli was much
shorter (300 ms) than in prior studies (1000 ms). The
results contribute to the growing support for language
nonselective lexical access in deaf sign–print bilinguals.
Further, the study found evidence that deaf sign–
print bilinguals are considerably faster than hearing
monolinguals in making decisions about English words.
This novel finding is an indication that deaf bilinguals may
benefit from differences in lexical access unique to signed
languages that extend to the processing of orthographic
words. Namely, stronger, more direct connections between
semantics and L1 phonology in signers may, in turn,
result in faster connections between L2 orthography
and semantics. Whether activation of L1 phonological
forms is necessary for this pattern of activation to
become established, or whether it is independent of
L1 phonological word form processing, requires further
investigation.
Together, the co-activation effects and the speed of lexical processing help to
distinguish between the various models of bilingual word
recognition proposed in the introduction to this article.
Specifically, the pattern of results is not consistent with
a model of word recognition in which orthographic
strings would activate sublexical phonological units in
the spoken language prior to activating lexical level
representations in the signed language. Of the proposed
models, these data are most consistent with a model
in which orthographic word forms activate semantics
as well as signed phonological forms (Ormel, 2008,
Ormel et al., 2012), and in which spoken phonological
forms are less entrenched and less likely to shape lexical
processing in a deterministic manner. The fact that
phonological representations in the two languages have
different underlying motor and sensory representations
could allow for a unique configuration of the bilingual
lexicon that is specific to sign–print bilinguals.
References
Allen, T. E., Clark, M. D., del Giudice, A., Koo, D., Lieberman,
A., Mayberry, R., & Miller, P. (2009). Phonology and
reading: A response to Wang, Trezek, Luckner, and Paul.
American Annals of the Deaf, 154, 338–345.
Bakker, I., Takashima, A., van Hell, J. G., Janzen, G.,
& McQueen, J. M. (2014). Competition from unseen
or unheard novel words: Lexical consolidation across
modalities. Journal of Memory and Language, 73, 116–
130.
Battison, R. (1978). Lexical borrowing in American Sign
Language. Silver Spring, MD: Linstok Press.
Bélanger, N. N., Baum, S. R., & Mayberry, R. I. (2012). Reading
difficulties in adult deaf readers of French: Phonological
codes, not guilty! Scientific Studies of Reading, 16, 263–
285.
Bijeljac-Babic, R., Biardeau, A., & Grainger, J. (1997). Masked
orthographic priming in bilingual word recognition.
Memory & Cognition, 25, 447–457.
Bishop, M., & Hicks, S. (2005). Orange Eyes: Bimodal
bilingualism in hearing adults from Deaf families. Sign
Language Studies, 5, 188–230.
Brysbaert, M., Van Dyck, G., & Van de Poel, M. (1999).
Visual word recognition in bilinguals: Evidence from
masked phonological priming. Journal of Experimental
Psychology: Human Perception and Performance, 25, 137–
148.
Chamberlain, C., & Mayberry, R. I. (2008). ASL syntactic and
narrative comprehension in skilled and less skilled adult
readers: Bilingual-bimodal evidence for the linguistic
basis of reading. Applied Psycholinguistics, 28, 537–
549.
Colin, S., Magnan, A., Ecalle, J., & Leybaert, J. (2007). Relation
between deaf children’s phonological skills in kindergarten
and word recognition performance in first grade. Journal
of Child Psychology and Psychiatry, 48, 139–146.
De Groot, A. M., & Nas, G. L. (1991). Lexical representation of
cognates and noncognates in compound bilinguals. Journal
of Memory and Language, 30, 90–123.
Dijkstra, T., & van Heuven, W. J. B. (2002). The architecture of
the bilingual word recognition system: From identification
to decision. Bilingualism: Language and Cognition, 5, 175–
197.
Eddington, C. M., & Tokowicz, N. (2015). How meaning
similarity influences ambiguous word processing: The
current state of the literature. Psychonomic Bulletin &
Review, 22, 13–37.
Emmorey, K., Borinstein, H. B., Thompson, R., & Gollan, T. H.
(2008). Bimodal bilingualism. Bilingualism: Language and
Cognition, 11, 43–61.
Guo, T., Misra, M., Tam, J. W., & Kroll, J. F. (2012). On the
time course of accessing meaning in a second language:
An electrophysiological and behavioral investigation
of translation recognition. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 38, 1165–
1186.
Gutierrez, E., Williams, D., Grosvald, M., & Corina, D. (2012).
Lexical access in American Sign Language: An ERP
investigation of effects of semantics and phonology. Brain
Research, 1468, 63–83.
Hauser, P. C., Paludnevičienė, R., Supalla, T., & Bavelier, D.
(2008). American Sign Language-Sentence Reproduction
Test. In R. M. de Quadros (ed.), Sign languages: Spinning
and unraveling the past, present and future. TISLR 9, forty-
five papers and three posters from the 9th Theoretical Issues
in Sign Language Research Conference, Florianopolis,
Brazil, December 2006, pp. 160–172. Petrópolis/RJ, Brazil:
Editora Arara Azul.
Hermans, D., Knoors, H., Ormel, E., & Verhoeven, L. (2008).
The relationship between the reading and signing skills of
deaf children in bilingual education programs. Journal of
Deaf Studies and Deaf Education, 13, 518–530.
Hoffmeister, R. J., & Caldwell-Harris, C. L. (2014). Acquiring
English as a second language via print: The task for deaf
children. Cognition, 132, 229–242.
Hoshino, N., & Kroll, J. F. (2008). Cognate effects in picture
naming: Does cross-language activation survive a change
of script? Cognition, 106, 501–511.
Jared, D., & Kroll, J. F. (2001). Do bilinguals activate
phonological representations in one or both of their
languages when naming words? Journal of Memory and
Language, 44, 2–31.
Johnson, J. M., Watkins, R. V., & Rice, M. L. (1992). Bimodal
bilingual language development in a hearing child of deaf
parents. Applied Psycholinguistics, 13, 31–52.
Kubuş, O., Villwock, A., Morford, J. P., & Rathmann,
C. (2015). Word recognition in deaf readers: Cross-
language activation of German Sign Language and
German. Applied Psycholinguistics, 36, 831–854.
doi:10.1017/S0142716413000520.
Kuntze, M. (2004). Literacy acquisition and deaf children: A
study of the interaction of ASL and written English. PhD
dissertation, Stanford University.
Lemhöfer, K., Dijkstra, T., Schriefers, H., Baayen, R. H.,
Grainger, J., & Zwitserlood, P. (2008). Native language
influences on word recognition in a second language: A
megastudy. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 34, 12–31.
Leybaert, J. (1993). Reading in the deaf: The roles of
phonological codes. In M. Marschark & M. D. Clark
(eds.), Psychological perspectives on deafness, pp. 269–
309. Hillsdale, NJ: Erlbaum.
Marian, V., & Spivey, M. J. (2003). Competing activation
in bilingual language processing: Within- and between-
language competition. Bilingualism: Language and
Cognition, 6, 97–115.
Martin, C. D., Costa, A., Dering, B., Hoshino, N., Wu, Y. J., &
Thierry, G. (2012). Effects of speed of word processing
on semantic access: The case of bilingualism. Brain and
Language, 120, 61–65.
Mayberry, R. I., del Giudice, A. A., & Lieberman, A. M. (2011).
Reading achievement in relation to phonological coding
and awareness in deaf readers: A meta-analysis. Journal of
Deaf Studies and Deaf Education, 16, 164–188.
Morford, J. P., Kroll, J. F., Piñar, P., & Wilkinson, E. (2014).
Bilingual word recognition in deaf and hearing signers:
Effects of proficiency and language dominance on cross-
language activation. Second Language Research, 30, 251–
271.
Morford, J. P., Wilkinson, E., Villwock, A., Piñar, P., & Kroll,
J. F. (2011). When deaf signers read English: Do written
words activate their sign translations? Cognition, 118, 286–
292.
Nas, G. (1983). Visual word recognition in bilinguals: Evidence
for a cooperation between visual and sound based codes
during access to a common lexical store. Journal of Verbal
Learning & Verbal Behavior, 22, 526–534.
Ormel, E. (2008). Visual word recognition in bilingual
deaf children. PhD dissertation, Radboud University
Nijmegen.
Ormel, E., Hermans, D., Knoors, H., & Verhoeven, L. (2012).
Cross-language effects in visual word recognition: The
case of bilingual deaf children. Bilingualism: Language
and Cognition, 15, 288–303.
Perfetti, C. A., & Sandak, R. (2000). Reading optimally builds
on spoken language: Implications for deaf readers. Journal
of Deaf Studies and Deaf Education, 5, 32–50.
Perfetti, C. A., & Tan, L. H. (1998). The time course
of graphic, phonological, and semantic activation in
Chinese character identification. Journal of Experimental
Psychology: Learning, Memory, and Cognition, 24, 101–
118.
Piñar, P., Dussias, P. E., & Morford, J. P. (2011). Deaf readers
as bilinguals: An examination of deaf readers’ print
comprehension in light of current advances in bilingualism
and second language processing. Language and Linguistics
Compass, 5, 691–704.
Shook, A., & Marian, V. (2012). Bimodal bilinguals co-activate
both languages during spoken comprehension. Cognition,
124, 314–324.
Shook, A., & Marian, V. (2013). The Bilingual Language
Interaction Network for Comprehension of Speech.
Bilingualism: Language and Cognition, 16, 304–324.
Stokoe, W. C., Casterline, D. C., & Croneberg, C. G. (1965). A
dictionary of American Sign Language on linguistic principles.
Washington, DC: Gallaudet College Press.
Sunderman, G., & Kroll, J. F. (2006). First language
activation during second language lexical processing: An
investigation of lexical form meaning and grammatical
class. Studies in Second Language Acquisition, 28, 387–
422.
Supalla, S. J., Wix, T. R., & McKee, C. (2001). Print as a
primary source of English for deaf learners. In J. Nicol
& D. T. Langendoen (eds.), One mind, two languages:
Studies in bilingual language processing, pp. 177–190.
Oxford: Blackwell Publishing.
Talamas, A., Kroll, J. F., & Dufour, R. (1999). From form to
meaning: Stages in the acquisition of second-language
vocabulary. Bilingualism: Language and Cognition, 2, 45–
58.
Thierry, G., & Wu, Y. J. (2004). Electrophysiological evidence
for language interference in late bilinguals. NeuroReport,
15, 1555–1558.
Thierry, G., & Wu, Y. J. (2007). Brain potentials
reveal unconscious translation during foreign-language
comprehension. Proceedings of the National Academy of
Sciences, 104, 12530–12535.
Van Wijnendaele, I., & Brysbaert, M. (2002). Visual word
recognition in bilinguals: Phonological priming from the
second to the first language. Journal of Experimental
Psychology: Human Perception and Performance, 28, 616–
627.
Wilcox, S. (2004). Cognitive iconicity: Conceptual spaces,
meaning, and gesture in signed languages. Cognitive
Linguistics, 15, 119–147.
Wu, Y. J., Cristino, F., Leek, C., & Thierry, G. (2013). Non-
selective lexical access in bilinguals is spontaneous and
independent of input monitoring: Evidence from eye
tracking. Cognition, 129, 418–425.
Wu, Y., & Thierry, G. (2010). Chinese-English bilinguals
reading English hear Chinese. The Journal of Neuroscience,
30, 7646–7651.