Article

Covert Co-Activation of Bilinguals' Non-Target Language: Phonological Competition from Translations


Abstract

When listening to spoken language, bilinguals access words in both of their languages at the same time; this co-activation is often driven by phonological input mapping to candidates in multiple languages during online comprehension. Here, we examined whether cross-linguistic activation could occur covertly when the input does not overtly cue words in the non-target language. When asked in English to click an image of a duck, English-Spanish bilinguals looked more to an image of a shovel than to unrelated distractors, because the Spanish translations of the words duck and shovel (pato and pala, respectively) overlap phonologically in the non-target language. Our results suggest that bilinguals access their unused language, even in the absence of phonologically overlapping input. We conclude that during bilingual speech comprehension, words presented in a single language activate translation equivalents, with further spreading activation to unheard phonological competitors. These findings support highly interactive theories of language processing.


... When a multilingual speaker hears a sentence in their non-native language, word recognition is so fast that it may seem unthinkable that, to be selected, a word must compete against other lexical items within its own language and across the other languages the speaker knows (Shook & Marian, 2019). The more similar the words sound, the more competition there is (Lagrou et al., 2013b), and the longer the brain takes to select, process, and integrate such words. ...
... Possibly one of the most important findings on trilingual word recognition is that, similarly to bilinguals, trilinguals' lexical access is also nonselective (Lemhöfer et al., 2004; Bartolotti & Marian, 2018). As mentioned above, cross-linguistic co-activation can accumulate across three languages (Lemhöfer et al., 2004; Szubko-Sitarek, 2011) and, similarly to bilinguals, it can be driven by covert co-activation of lexical items, arising from the form similarity of translation equivalents without any direct form overlap with the input (Bartolotti & Marian, 2018; Shook & Marian, 2019). ...
... Finally, the bi-directionally and laterally spreading co-activation principle allows BLINCS to explain covert competition from items that do not directly match the input in form (Shook & Marian, 2019). Co-activation can arise in two ways: it can be triggered by feedback from the semantic level (e.g., hearing "duck" activates the word bird), or the initial activation from the input can spread laterally at the lexicon level via co-activation of translations, i.e., hearing "duck" activates the English orthographic and phonological forms and the Spanish translation pato (duck), which in turn co-activates other Spanish and English cohorts, e.g., pala (shovel) (Shook & Marian, 2019). ...
Thesis
Persistent non-target language co-activation in spoken and visual language comprehension has been found at both the word and sentence levels, although at the sentence level, sentence bias has been observed to modulate the co-activation that creates lexical competition. In the case of trilingual speakers, both non-target languages may potentially compete with the third language (L3). The current study aimed to investigate how cross-linguistic (or interlingual) competition across three languages is modulated by sentence bias while listening to the L3. Of particular interest was whether top-down sentential information would modulate not only single but also double bottom-up driven cross-linguistic competition. A picture-word recognition task was given to 44 L1 Russian, L2 English, late L3 Swedish learners, who listened to Swedish sentences online while their reaction times and accuracy were collected. The results revealed shorter processing times and higher accuracy for high- compared to low-constraint sentences, and overall lower accuracy (and slower reactions in high-constraint sentences) when an L1 Russian competitor's translation overlapped phonologically at onset with a Swedish target word. The findings suggest that when trilinguals were processing L3 speech, top-down information from the sentential context did not modulate the bottom-up guided L1 phonological competition. However, the effect of an L2 English L3 Swedish cognate competitor was not significant. This pattern of results is in line with BLINCS (Shook & Marian, 2013), which assumes gradual co-activation decay (i.e., a strong cross-linguistic competition effect might be observed in the end-course reaction times) and a direct influence of visual information on linguistic processing. It is, however, inconsistent with the BIA+ model (Dijkstra & Van Heuven, 2002), which predicts that a high-constraint sentence context can modulate cross-linguistic competition, particularly at later processing stages. The full text is available at https://urn.kb.se/resolve?urn=urn:nbn:se:su:diva-217738
... Theoretical models of language co-activation are supported by empirical evidence from unimodal tasks with bilinguals in the auditory (Fitzpatrick & Indefrey, 2010; Weber & Cutler, 2006), visual (Chabal & Marian, 2015; Finkbeiner et al., 2004; Martín et al., 2010; Schoonbaert et al., 2009; Sunderman & Kroll, 2006; Thierry & Wu, 2007), and audio-visual modalities (Blumenfeld & Marian, 2013; Giezen et al., 2015; Ju & Luce, 2004; Marian & Spivey, 2003a, b; Shook & Marian, 2019), suggesting that phonological overlap between words across languages leads to parallel activation (e.g., Marian & Spivey, 2003a, b; Shook & Marian, 2013). ...
... p < 0.01). In addition, previous research has demonstrated that cross-linguistic competition occurs early on, within the first 600 ms post-target onset (Blumenfeld & Marian, 2013; Shook & Marian, 2019). We thus used a narrower time window (300–600 ms post-sound onset) for follow-up analyses to confirm initial results based on visual inspection of the data. ...
... Visual inspection of the time course data suggested that monolinguals produced more looks to competitors and fillers overall early on. To further uncover whether this trend held for monolinguals and bilinguals in a narrower time window when cross-linguistic effects are typically present (Blumenfeld & Marian, 2013; Shook & Marian, 2019), we selected 300–600 ms post-sound onset. These follow-up analyses revealed no main effects or interactions within the narrower time window. ...
Article
A bilingual’s language system is highly interactive. When hearing a second language (L2), bilinguals access native-language (L1) words that share sounds across languages. In the present study, we examine whether input modality and L2 proficiency moderate the extent to which bilinguals activate L1 phonotactic constraints (i.e., rules for combining speech sounds) during L2 processing. Eye movements of English monolinguals and Spanish–English bilinguals were tracked as they searched for a target English word in a visual display. On critical trials, displays included a target that conflicted with the Spanish vowel-onset rule (e.g., spa), as well as a competitor containing the potentially activated “e” onset (e.g., egg). The rule violation was processed either in the visual modality (Experiment 1) or audio-visually (Experiment 2). In both experiments, bilinguals with lower L2 proficiency made more eye movements to competitors than fillers. Findings suggest that bilinguals who have lower L2 proficiency access L1 phonotactic constraints during L2 visual word processing with and without auditory input of the constraint-conflicting structure (e.g., spa). We conclude that the interactivity between a bilingual’s two languages is not limited to words that share form across languages, but also extends to sublexical, rule-based structures.
... Fixations were again analysed according to interest areas and distractors were averaged. As there is evidence for an effect of phonological competition from translation equivalents on visual attention [83], three time windows were inspected for phonological competition during production. The first time window started 600 ms post target-word onset, under the assumption that L2 lexical access and the ensuing L1 retrieval take 200 ms each, with a further 200 ms for planning and executing an eye movement based on the retrieval of the L1 word form of the target. ...
... It may be important to consider that the comprehension task required the processing of an audio input followed by a motor response (selecting the image corresponding to the audio input, moving the mouse cursor and clicking on that image), while the production task required the processing of an L2 audio input, the translation of the whole input (interpreter variant) or of the sentence-final target word (non-interpreter variant) into the participants' L1, as well as the articulation of the translation. The tasks thus differed in the nature of the required responses: the comprehension task involved two motor responses (eye and hand movements), whereas the production task involved a motor and a verbal response (eye movements and articulation of the translated sentence or target-object name). The production task also differed from previous production studies, which found evidence for co-activation during production but did not comprise a language-transfer element [59,67-69,83]. As one reviewer helpfully suggested, it is possible that the comprehension task allowed for an easier association of responses of the same type than the second task, which associated distinct response types. ...
Article
Full-text available
This study examines the phonological co-activation of a task-irrelevant language variety in mono- and bivarietal speakers of German with and without simultaneous interpreting (SI) experience during German comprehension and production. Assuming that language varieties in bivarietal speakers are co-activated analogously to the co-activation observed in bilinguals, the hypothesis was tested in the Visual World paradigm. Bivarietalism and SI experience were expected to affect co-activation, as bivarietalism requires communication-context based language-variety selection, while SI hinges on concurrent comprehension and production in two languages; task type was not expected to affect co-activation, as previous evidence suggests the phenomenon occurs during both comprehension and production. Sixty-four native speakers of German participated in an eye-tracking study and completed a comprehension and a production task. Half of the participants were trained interpreters and half of each sub-group were also speakers of Swiss German (i.e., bivarietal speakers). For comprehension, a growth-curve analysis of fixation proportions on phonological competitors revealed cross-variety co-activation, corroborating the hypothesis that co-activation in bivarietals' minds bears similar traits to language co-activation in multilingual minds. Conversely, co-activation differences were not attributable to SI experience, but rather to differences in language-variety use. Contrary to expectations, no evidence for phonological competition was found for either same- or cross-variety competitors in either production task (interpreting- and word-naming variety).
While phonological co-activation during production cannot be excluded based on our data, the effects of the additional demands involved in a production task hinging on a language-transfer component (oral translation from English to Standard German) merit further exploration, in light of a more nuanced understanding of the complexity of the SI task.
... For instance, when instructed in Russian to pick up the stamp (marku), they would also look at the marker pen because the two words shared the same initial syllable, even when English was not explicitly spoken in the task. Similarly, Shook and Marian (2019) found that English-Spanish bilinguals looked at the image of a shovel (pala in Spanish) more than at unrelated distractors when asked in English to click on a "duck" (pato in Spanish). ...
... Using the eye-tracking method, Spivey and Marian (1999) found that in Russian-English bilinguals, the irrelevant language was activated during an auditory task. This parallel activation of languages was later replicated in Spanish-English (Shook & Marian, 2019) and Hindi-English (Mishra & Singh, 2014) bilinguals. Because the two languages are always activated together, a bilingual has to inhibit the unwanted one in order to speak in the language relevant to the conversation. ...
Thesis
Full-text available
Full text available here: https://theses.lib.polyu.edu.hk/handle/200/12383 Bilingualism has been attracting interest from the cognitive science field for years, as it is suggested to be a protective factor against cognitive decline in ageing. It is often reported that bilinguals perform better than monolinguals in inhibitory control tasks. The proposed mechanism behind this better inhibitory control is that bilinguals have to suppress interference from the unwanted language all the time, and such linguistic control is thought to overlap, at least partially, with the general inhibitory control network. However, inconsistent results have been reported. It is common for the literature to compare monolinguals and bilinguals as two homogeneous groups without considering the individual variations between and among them. Moreover, as the Adaptive Control Hypothesis (Green & Abutalebi, 2013) suggests, the interaction context affects the cognitive demand of controlling the languages. Three experiments were designed to explore how different aspects of bilingualism contribute to cognition and the bilingual advantage effect. The first experiment recruited older adults to complete a comprehensive set of cognitive tests together with questionnaires on their language and demographic profiles. Comparing the monolinguals and bilinguals, we found the classic bilingual advantage effect: bilinguals scored higher in the Montreal Cognitive Assessment (MoCA), indicating better cognitive status. Moreover, within the bilinguals, scores on the cognitive battery were predicted from demographic and linguistic variables using linear regression analysis. We found that L2 proficiency predicts better inhibitory control and verbal ability performance in lifelong bilinguals. We propose that, because our participants are L1-dominant speakers, only a sufficiently proficient L2 provides enough interference to exercise linguistic inhibitory control.
The second experiment investigated the cognitive changes in older foreign-language learners. Older adults were recruited to study in an elementary English course for six weeks, with cognitive tests taken before and after the course. Although the statistical results between the intervention group and the active and passive control groups were not significant, language-learning-induced differences were observed in some tasks, including the accuracy of Picture Naming and the Conflicting Effect in the Attention Network Task. Correlation analysis suggested that successful language learners showed an improvement in inhibitory control and a decline in verbal fluency. The third experiment investigated the organisation of the mental lexicon through an interesting language phenomenon in Hong Kong: dense code-switching. Whereas the literature often suggests that the comprehension of code-switching requires a switch in lexicon and is therefore challenging, we found that a lexicon switch was needed only in the case of non-habitual word usage, regardless of whether the utterance was unilingual or code-switched. From the result of this experiment, we propose that the language input from the community had formed bilingual prefabs, which were integrated into the dominantly Cantonese lexicon. Collectively, we suggest that the environment, language and cognition form a looping circle in which each component is interrelated. Moreover, they each affect the organisation of the bilingual mental lexicon and the retrieval of concepts from the lexicon. In view of that, we propose the Experience-based Bilingual Mental Lexicon Model, which is modified from the Revised Hierarchical Model (Kroll & Stewart, 1994). Two critical assumptions are incorporated into the existing model: (1) the language lexicon is organised by experience rather than by language origin, and (2) language dominance is dynamic. We believe the proposed model could better capture the dynamic change of language by experience.
It could explain how individual differences contribute to the bilingual advantage effect. References: Green, D. W., & Abutalebi, J. (2013). Language control in bilinguals: The adaptive control hypothesis. Journal of Cognitive Psychology, 25(5), 515-530. https://doi.org/10.1080/20445911.2013.796377 Kroll, J. F., & Stewart, E. (1994). Category interference in translation and picture naming: Evidence for asymmetric connections between bilingual memory representations. Journal of Memory and Language, 33(2), 149-174. https://doi.org/10.1006/jmla.1994.1008
... Following Ito et al. (2018) and Shook and Marian (2019), among others (see Huang & Snedeker, 2020 for discussion of alternative approaches), we then performed a growth curve analysis (GCA) on the eye movement data in R. We ran a mixed-effects model using the lme4 package (version 1.1.21; Bates et al., 2015); p values were obtained via the lmerTest package (version 3.1; Kuznetsova et al., 2017). ...
... Specifically, at low levels of proficiency, the L2 is lexically mediated via the L1, with increases in proficiency strengthening the direct links between the L2 and the conceptual store. ...

... This sample size is in line with that of other studies using the same paradigm (Ito et al., 2018; Shook & Marian, 2019). ...
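The growth curve analysis (GCA) mentioned in these excerpts fits fixation proportions over time with polynomial time terms in a mixed-effects model (R's lme4/lmerTest). As an illustrative sketch only, here is a simplified two-stage analogue in Python: a quadratic growth curve is fit to each participant's fixation curve and the coefficients are then averaged. The simulated data, variable names, and effect sizes below are assumptions for demonstration, not the data or model specification of the studies cited.

```python
# Simplified two-stage sketch of a growth curve analysis (GCA).
# Stage 1: fit a quadratic curve per simulated participant.
# Stage 2: average the per-participant coefficients.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_bins = 20, 10
t = np.linspace(0.0, 1.0, n_bins)          # normalized time bins

coefs = []
for _ in range(n_subj):
    subj_icpt = rng.normal(0.0, 0.05)      # per-participant baseline shift
    # competitor-fixation proportion rises then falls across the window
    prop = 0.25 + subj_icpt + 0.3 * t - 0.3 * t**2 + rng.normal(0.0, 0.02, n_bins)
    coefs.append(np.polyfit(t, prop, 2))   # [quadratic, linear, intercept]

quad, lin, icpt = np.mean(coefs, axis=0)
se = np.std(coefs, axis=0, ddof=1) / np.sqrt(n_subj)
print(f"linear: {lin:.2f} (SE {se[1]:.2f}), quadratic: {quad:.2f} (SE {se[0]:.2f})")
```

A full GCA as used in this literature would instead fit a single mixed-effects model with orthogonal polynomial time terms and random slopes by participant (in lme4 notation, something like `prop ~ ot1 + ot2 + (1 + ot1 + ot2 | subject)`); the two-stage version above is only a dependency-light approximation of that idea.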
Article
Full-text available
Aims The study investigates the effects of L2 proficiency and L2 exposure on L2-to-L1 cross-language activation (CLA) in L1-dominant bilinguals. In so doing, it tests the predictions made by prominent models of the bilingual lexicon regarding how language experience modulates CLA. Design The participants (27 L1-dominant L1 English–L2 Afrikaans speakers) completed a visual world eye-tracking task, conducted entirely in English, in which they saw four objects on a screen: a target object, which they were instructed to click on; a competitor object, whose Afrikaans label overlapped phonetically at onset with the English target object label; and two unrelated distractors. Language background data were collected using the Language History Questionnaire 3.0. Analysis A growth curve analysis was performed to investigate the extent to which the background variables modulated looks to the Afrikaans competitor item versus to the two unrelated distractor items. Findings Increased L2 exposure was associated with greater CLA, which is consistent with models suggesting that exposure modulates the likelihood and speed with which a linguistic item becomes activated. Moreover, CLA was reduced at higher levels of L2 proficiency, which aligns with accounts of the bilingual lexicon positing that parasitism of the L2 on the L1 is reduced at higher proficiency levels, leading to reduced CLA. Originality L2 activation during L1 processing and the variables that modulate it are not well documented, particularly among L1 speakers with limited proficiency in and exposure to the L2. Significance The findings contribute to the evaluation of competing accounts of bilingual lexical organization.
... In many of these studies, non-selective access to words in both languages is driven by phonological ambiguity in the input (10-13), that is, words from different languages that sound alike. Additionally, there is evidence for cross-language co-activation between two spoken languages in the absence of overt phonological overlap ('phonologically covert co-activation') (14). In a visual world paradigm, English-Spanish bilinguals looked more to the image of a shovel than to unrelated distractors when asked to click on an image of a duck. ...
... The fact that the participants were all professional sign language interpreters (which was necessary to ensure that they were highly proficient in LSE) may have an impact on the organization of their mental lexicon and how the two languages interact. Nevertheless, co-activation of translation equivalents is widely reported in various bilingual populations (4,14,20) and falls more generally within semantic coactivation, which occurs in monolingual individuals. The sequential combination of semantic and phonological co-activation in STAR > 'estrella' > 'espada' has been reported for within-language contexts in the reverse direction: 'logs' > 'lock' > 'key' (29). ...
Article
We exploit the phenomenon of cross-modal, cross-language activation to examine the dynamics of language processing. Previous within-language work showed that seeing a sign coactivates phonologically related signs, just as hearing a spoken word coactivates phonologically related words. In this study, we conducted a series of eye-tracking experiments using the visual world paradigm to investigate the time course of cross-language coactivation in hearing bimodal bilinguals (Spanish–Spanish Sign Language) and unimodal bilinguals (Spanish/Basque). The aim was to gauge whether (and how) seeing a sign could coactivate words and, conversely, how hearing a word could coactivate signs and how such cross-language coactivation patterns differ from within-language coactivation. The results revealed cross-language, cross-modal activation in both directions. Furthermore, comparison with previous findings of within-language lexical coactivation for spoken and signed language showed how the impact of temporal structure changes in different modalities. Spoken word activation follows the temporal structure of that word only when the word itself is heard; for signs, the temporal structure of the sign does not govern the time course of lexical access (location coactivation precedes handshape coactivation)—even when the sign is seen. We provide evidence that, instead, this pattern of activation is motivated by how common in the lexicon the sublexical units of the signs are. These results reveal the interaction between the perceptual properties of the explicit signal and structural linguistic properties. Examining languages across modalities illustrates how this interaction impacts language processing.
... However, there is also evidence of coactivation even when there is no phonological overlap between the spoken word and the competitor in the nontarget language. One example comes from Shook and Marian (2017), who found evidence of "covert activation" using the eye-tracking and visual world paradigm. When English-Spanish bilinguals were asked to click on a picture of a duck in English, they were more likely to fixate on a picture of a shovel than on other competitors in the display. ...
... It may be more surprising, however, to find out that a lower proficiency language can influence how we process a higher proficiency language. Indeed, several studies utilizing different methodologies, including eye-tracking, have shown that a less proficient nontarget language can be activated in a monolingual dominant-language context (e.g., Lagrou et al., 2013;Lemhöfer et al., 2018;Shook & Marian, 2017). In other words, bilinguals can activate both dominant and nondominant languages even when they are not in use. ...
Chapter
Full-text available
Bilingual Lexical Ambiguity Resolution, edited by Roberto R. Heredia, January 2020
... Moreover, the detection of cross-language activation of primary translations was based on implicit sound repetition. Similar findings were reported in an eye-tracking study by Shook and Marian (2019), in which Spanish-English bilinguals heard words in their L2 English and were asked to click on the corresponding picture from a set of pictures. The results of their study showed longer and more frequent eye fixations on pictures whose translations were phonologically similar to the L2 target word's translation. ...
Article
Full-text available
Aims and Objectives: Translation ambiguous words are lexical items with one-to-many equivalents in another language. Some of these equivalents are more dominant (i.e., more frequently used) than others. The aim of the present study is to explore non-target language activation of translation ambiguous words among Chinese-English bilinguals. Methodology: The implicit priming paradigm was used in three experiments to explore: the activation of primary and secondary first language (L1) translations when bilinguals process second language (L2) translation ambiguous words (Experiment 1); the effect of L1 translation repetition on the processing of semantically related words in an L2 (Experiment 2); and whether such patterns are observed in the reverse direction, i.e., whether L2 primary translations are activated when processing L1 translation ambiguous words (Experiment 3). Data and Analysis: We use repeated measures ANOVAs to analyze the data. Findings/conclusions: Experiment 1 showed that when processing pairs of semantically unrelated L2 words, primary L1 translation equivalents are activated, but not secondary L1 translation equivalents. Experiment 2 found that when the target L2 words were semantically related, performance was facilitated when their translation equivalents were the same L1 word (i.e., implicitly repeated). Similarly, Experiment 3 showed that when processing L1 words, the L2 translation equivalents are automatically activated. Moreover, under semantically related conditions, implicit repetition of the non-target L2 translation facilitated L1 judgements, while under semantically unrelated conditions, L2 implicit repetition hampered L1 judgements. Originality: Most research on cross-language activation has examined L1 activation during L2 processing. However, few have investigated the reverse and findings from these few studies are inconsistent. 
Moreover, research on cross-language activation has mainly investigated the activation of primary translation equivalents, with very little focus on secondary translations. The present study uses the implicit priming paradigm to address these gaps in the literature. Significance: The findings support interactive theories of bilingual processing.
... For example, Russian-English bilinguals participating in a VWP session conducted completely in English fixated stamp [/marku/ in Russian] when a stamp was in the display when the instruction was 'click on the marker' (Spivey & Marian, 1999). Shook and Marian (2019) contrasted this "overt coactivation" (looking at a depicted item with a name similar in the 'background' language to a 'foreground' language phonological form) with a "covert coactivation" paradigm. When Spanish-English bilinguals were asked in English to click on a picture of a duck, they were significantly more likely to fixate shovel than an unrelated item, revealing phonological activation in Spanish (duck=pato and shovel=pala). ...
Article
Psycholinguists define spoken word recognition (SWR) as, roughly, the processes intervening between speech perception and sentence processing, whereby a sequence of speech elements is mapped to a phonological wordform. After reviewing points of consensus and contention in SWR, we turn to the focus of this review: considering the limitations of theoretical views that implicitly assume an idealized (neurotypical, monolingual adult) and static perceiver. In contrast to this assumption, we review evidence that SWR is plastic throughout the life span and changes as a function of cognitive and sensory changes, modulated by the language(s) someone knows. In highlighting instances of plasticity at multiple timescales, we are confronted with the question of whether these effects reflect changes in content or in processes, and we consider the possibility that the two are inseparable. We close with a brief discussion of the challenges that plasticity poses for developing comprehensive theories of spoken language processing. Published in the Annual Review of Linguistics, Volume 10 (January 2024).
... In bilinguals, two languages are simultaneously active during language production and comprehension. Accordingly, many studies have shown parallel co-activation of both languages in a diverse set of tasks (Bobb et al., 2020; McDonald & Kaushanskaya, 2020; Sadat et al., 2015; Shook & Marian, 2019). Classically, this co-activation has been studied by means of experimental tasks involving the processing of cognates (i.e., words in different languages that have similar forms and meanings, such as piano in Spanish and English) or homographs (i.e., words that are similar in form but differ in meaning, such as carpeta, which means folder in Spanish, not carpet). ...
Thesis
Full-text available
The current thesis endeavours to understand how the cognitive effects of knowing and managing two languages can affect an essential ability, namely, remembering future intentions. In researching it, we have observed the important role of the language in which these activities are performed, as well as the characteristics that define bilinguals. Overall, this work is an exciting starting point to guide my future research and my attempts to draw a broader picture of how bilingualism influences memory.
... The findings fit well with the BIA+ model, in which bottom-up parallel language activation and lexical competition between languages coexist. Moreover, in investigating the nature of lexical processing (i.e., orthography and phonology) across modalities, a series of studies with the Visual World Paradigm (Berghoff et al., 2021; Marian et al., 2008; Shook & Marian, 2019; Veivo et al., 2016; Weber & Cutler, 2004) have revealed an impressive amount of interactivity between lexicons in a bottom-up manner for same-script bilinguals. Further converging results were also reported in bilinguals who use different scripts (Giezen et al., 2015; Mishra & Singh, 2014; Shook & Marian, 2012). ...
Article
Full-text available
Extensive evidence has demonstrated that bilinguals non-selectively activate lexicons of both languages when reading or hearing words in one language. Here, we further investigated the electrophysiological roles of cross-linguistic orthography and phonology in the processing of L2 spoken words in unbalanced Chinese (L1)–English (L2) bilinguals in a cross-modal situation. Relative to unrelated control, the recognition of auditory L2 words showed behavioral interference effects when paired with orthographic or phonological neighbors of the correct translations of L2 words. Moreover, the lexical effects were also exhibited in the electrophysiological data, as reflected by marginally less positive late positive component (500–800 ms) amplitudes in the frontal region. Importantly, the orthographic rather than phonological translation neighbor condition elicited less negative N400 (300–500 ms) amplitudes in the parietal–occipital regions, suggesting that this orthographic translation neighbor condition facilitated the co-activation of spoken L2 words. Taken together, these findings indicate that cross-linguistic orthographic and phonological activation have different temporal dynamics with both bottom-up parallel cross-linguistic activation and the top-down inhibitory control mechanism governing the two-language lexical organization in L2 spoken word recognition.
... The view that bilingualism affects cognition rests on the assumption that a key aspect of the bilingual experience is language processing under conditions of constant language co-activation (Shook & Marian, 2019; Thierry & Wu, 2007). Consequently, for any activity in which language is relevant, bilinguals need to draw on a cognitive control mechanism that enables selecting the appropriate language by resolving the between-language competition. ...
... Besides these ties to phonology, fingerspelling also provides an alternative way to encode orthographic information aside from writing. Orthographic representations may be shared between words and signs; especially since there is no written form of ASL, fingerspelling may strengthen these representations and facilitate language co-activation in sign-print bilinguals (Shook and Marian 2019). Evidence for shared orthographic representations for fingerspelling and print can be seen in the fact that fingerspelling skill correlates with spelling skill (Sehyr and Emmorey 2022) and activates the left mid-fusiform gyrus, known functionally as the Visual Word Form Area (VWFA). ...
Article
Full-text available
Fingerspelling is a critical component of many sign languages. This manual representation of orthographic code is one key way in which signers engage in translanguaging, drawing from all of their linguistic and semiotic resources to support communication. Translanguaging in bimodal bilinguals is unique because it involves drawing from languages in different modalities, namely a signed language like American Sign Language and a spoken language like English (or its written form). Fingerspelling can be seen as a unique product of the unified linguistic system that translanguaging theories purport, as it blends features of both sign and print. The goals of this paper are twofold: to integrate existing research on fingerspelling in order to characterize it as a cognitive-linguistic phenomenon and to discuss the role of fingerspelling in translanguaging and communication. We will first review and synthesize research from linguistics and cognitive neuroscience to summarize our current understanding of fingerspelling, its production, comprehension, and acquisition. We will then discuss how fingerspelling relates to translanguaging theories and how it can be incorporated into translanguaging practices to support literacy and other communication goals.
... Executive functioning refers to a set of higher-order cognitive processes that are responsible for self-control and goal-oriented behavior, including inhibition, shifting of attention, and working memory (Diamond, 2013; Zelazo & Carlson, 2012). It has been proposed that to successfully communicate in the target language, bilinguals recruit domain-general executive functions to resolve the conflict that arises from two jointly activated language systems (e.g., Marian & Spivey, 2003; Shook & Marian, 2019; Thierry & Wu, 2007; for a review see Kroll, 2017). The need for bilinguals to maintain attention on the target language while ignoring the non-target language is thought to lead to enhanced executive functioning (Bialystok, 2015, 2017). ...
Article
Full-text available
Bilingual children have better Theory-of-Mind compared to monolingual children, but comparatively little research has examined whether this advantage in social cognitive ability also applies to adults. The current study investigated whether multilingual status and/or number of known languages predicts performance on a mentalizing task in a large sample of adult participants. Multilingualism was decomposed based on whether English is the first language or not. All analyses controlled for well-known predictors of mentalizing, such as gender, same-race bias, and years of English fluency. We found a U-shaped trend, such that monolinguals and multilinguals did not differ much in their mentalizing ability, but bilinguals performed worse than monolinguals. Our study builds upon past work by examining a large sample of participants, measuring a crucial aspect of adult social cognition that has previously been unexplored, controlling for several nuisance variables, and investigating whether multilingualism leads to additional benefits in mentalizing abilities beyond bilingualism.
... Within the bilingual system, connections are established through the simultaneous activations of related words (Kroll et al., 2010). As a result, starting from a young age, bilinguals often find themselves considering both within- and between-language competitors when selecting a word (Shook & Marian, 2019). Study 2's new bilingual findings demonstrate that these bilingual connections extend to sub-lexical components, namely lexical morphology. ...
Thesis
Early childhood language experiences influence how a child’s mind and brain process language and acquire literacy. For children growing up bilingual, their two languages interact in their minds, and these cross-linguistic influences can lead to unique neurocognitive mechanisms for language and reading compared to monolinguals. In this dissertation, I asked: how does early bilingualism impact children’s cognitive and neural organization for learning to read? To address this question, this dissertation includes three inter-related studies of behavioral and functional Near-Infrared Spectroscopy (fNIRS) neuroimaging assessments with young Chinese-English bilingual, Spanish-English bilingual, and English monolingual children (N = 283, ages 5-11). Children completed language and literacy tasks in each of their respective languages while their parents completed questionnaires on children’s language use. Studies 1 and 2 revealed that bilingual children processed English words in a way that reflected their proficiency with the characteristics of their heritage language – in particular, Chinese bilinguals relied more on meaning-based skills, whereas Spanish bilinguals on sound-based skills, to read English words (Study 1); and in both groups, higher heritage language proficiency is associated with stronger activations in the left temporal brain region when processing English words (Study 2). Study 3 further revealed that bilinguals with stronger dual language proficiency formed more widespread neural connections within the neural network for language processing. Taken together, the findings suggest that heritage language influences children’s literacy development and emerging neural organization for learning to read and these bilingual influences reflect children’s experiences and proficiency with both of their languages. These findings thus inform theories of bilingualism and literacy instruction for children from diverse socio-linguistic backgrounds.
... The parallel activation of the two languages of a bilingual is well-documented. For instance, Shook and Marian (2019) found that English-Spanish speakers looked at the image of a shovel more than the unrelated distractors when asked to click on a "duck" in English because the two words are phonologically similar in Spanish (pato and pala). The results replicated the classic Russian-English findings ("marku"-"marker") from the same group (Spivey and Marian 1999). ...
Article
Full-text available
Previous studies on the comprehension of code-switched sentences often neglected the code-switching habit of the specific community, so that the processing difficulty might not have resulted from the change in language but from unnatural switching. This study explores the processing cost of habitual and nonhabitual code-switching. Thirty-one young adults participated in the sentence-reading task with their eye movement tracked. A two-by-two factorial design was used, with Habit (habitual/nonhabitual) and Language (unilingual/code-switched) as the factors. The main effect of Language was observed only in First Fixation Duration, suggesting that the language membership was already identified in an early processing stage. However, for habitual switches, no switching cost in overall processing effort was found, as reflected by Total Fixation Duration and Visit Counts. Our results indicate that the cognitive load was only larger when the switch occurred nonhabitually, regardless of the language membership. In light of this finding, we propose that habitual code-switching might promote the formation of bilingual collocations, or prefabs, which are then integrated into the mental lexicon of the dominant language. Despite a conscious language tag of a foreign origin, these bilingual prefabs are not processed as a language switch in the lexicon.
... Within the bilingual system, connections are established through the simultaneous activations of related words (Kroll et al., 2010). As a result, starting from a young age, bilinguals often find themselves considering both within- and between-language competitors when selecting a word (Arredondo et al., 2019; Shook & Marian, 2019). Our new bilingual findings demonstrate that these bilingual connections extend to sub-lexical components, namely lexical morphology. ...
Article
How do early bilingual experiences influence children's neural architecture for word processing? Dual language acquisition can yield common influences that may be shared across different bilingual groups, as well as language‐specific influences stemming from a given language pairing. To investigate these effects, we examined bilingual English speakers of Chinese or Spanish, and English monolinguals, all raised in the US (N = 152, ages 5–10). Children completed an English morphological word processing task during fNIRS neuroimaging. The findings revealed both language‐specific and shared bilingual effects. The language‐specific effects were that Chinese and Spanish bilinguals showed principled differences in their neural organization for English lexical morphology. The common bilingual effects shared by the two groups were that in both bilingual groups, increased home language proficiency was associated with stronger left superior temporal gyrus (STG) activation when processing the English word structures that are most dissimilar from the home language. The findings inform theories of language and brain development during the key periods of neural reorganization for learning to read by illuminating experience‐based plasticity in linguistically diverse learners.
... Within the bilingual system, connections are established through the simultaneous activations of related words (Kroll, 2010). As a result, starting from a young age, bilinguals often find themselves considering both within- and between-language competitors when selecting a word (Arredondo et al., 2019; Shook & Marian, 2019). Our new bilingual findings demonstrate that these bilingual connections extend to sub-lexical components, namely lexical morphology. ...
Preprint
How do early bilingual experiences influence children’s neural architecture for word processing? Dual language acquisition can yield universal influences that generalize across bilinguals, as well as language-specific influences stemming from a given language pairing. To investigate these effects, we examined bilingual English speakers of Chinese or Spanish, and English monolinguals, all raised in the US (N = 152, ages 5-10). Children completed an English morphological word processing task during fNIRS neuroimaging. The findings revealed both language-specific and universal bilingual effects. The language-specific effects were that Chinese and Spanish bilinguals showed principled differences in their neural organization for English lexical morphology. The universal bilingual effects were that in both bilingual groups, increased home language proficiency was associated with stronger left superior temporal gyrus (STG) activation when processing the word structures that are most dissimilar from English. The findings inform theories of language and brain development by illuminating experience-based plasticity of the developing brain in children with diverse linguistic experiences.
... However, electrophysiological results suggest that L1-dominant bilinguals can implement different neurocognitive mechanisms in the use of L1 and L2 phonological information when learning novel words in an L3. The present findings are in line with previous studies demonstrating that bilinguals seem to access and activate their unused language during speech comprehension (FitzPatrick and Indefrey, 2014; Shook and Marian, 2019). Our results suggest that L1-dominant bilinguals can make use of both of their languages to learn L3 novel words, even when one of them is not explicitly present in the learning situation. ...
Article
This study investigated the influence of phonological word representations from both first language (L1) and second language (L2) on third language (L3) lexical learning in L1-dominant Spanish–English bilinguals. More specifically, we used event-related potentials (ERPs) to determine whether L1 Spanish and L2 English phonology modulates bilinguals’ brain response to newly learned L3 Slovak words, some of which had substantial phonological overlap with either L1 or L2 words (interlingual homophones) in comparison to matched control words with little or no phonological overlap. ERPs were recorded from a group of 20 Spanish–English bilinguals in response to 120 auditory Slovak words, both before and after a three-day-long learning period during which they associated the L3 Slovak novel words with their L1 Spanish translations. Behaviorally, both L1 Spanish and L2 English homophony facilitated the learning of L3 Slovak words in a similar manner. In contrast, the electrophysiological results of the post-training ERPs, but not the pre-training ERPs, showed an N100 effect for L2 English interlingual homophones and opposite N400 effects for L1 Spanish and L2 English interlingual homophones in comparison to control words. These findings suggest different neurocognitive mechanisms in the use of L1 and L2 phonological information when learning novel words in an L3.
... Although we used the dictation task instead of translation and the blocked design instead of intermixing languages across trials to avoid the direct activation of the non-intended language, it could still be argued that the use of both languages in the same experimental session could have enhanced language co-activation. However, recent studies have shown that language co-activation occurs even under very stringent single-language contexts, and even when language use is limited to the dominant language (Shook & Marian, 2019; Bobb, Von Holzen, Mayor, Mani & Carreiras, 2020). In the following subsections, we will discuss the evidence regarding language coactivation, the time course of lexical and sublexical activation, and finally, some issues regarding language differences. ...
Article
Full-text available
Bilinguals’ two languages seem to be coactivated in parallel during reading, speaking, and listening. However, this coactivation in writing has been scarcely studied. This study aimed to assess orthographic coactivation during spelling-to-dictation. We took advantage of the presence of polyvalent graphemes in Spanish (one phonological representation with two orthographic specifications, e.g., /b/ for both the graphemes v and b) to manipulate orthographic congruency. Spanish–English bilinguals were presented with cross-linguistic congruent (movement–movimiento) and incongruent words (government–gobierno) for a dictation task. The time and accuracy to initiate writing and to type the rest-of-word (lexical and sublexical processing) were recorded in both the native language (L1) and the second language (L2). Results revealed no differences between conditions in monolinguals. Bilinguals showed a congruency and language interaction with better performance for congruent stimuli, which was evident from the beginning of typing in L2. Language coactivation and lexical–sublexical interaction during bilinguals’ writing are discussed.
... Interestingly, although the linguistic level of overlap in stimuli (phonetic vs. phonemic) meant to probe cross-linguistic coactivation is important to both theory-building and experimental design, it has not received much attention in the empirical literature. Most studies report the number of overlapping phonemes (Canseco-Gonzalez et al., 2010; Ju & Luce, 2004; Marian & Spivey, 2003a, 2003b; Shook & Marian, 2017), and/or phonetic features (Canseco-Gonzalez et al., 2010; Marian & Spivey, 2003b), to quantify the overlap and to keep the amount of overlap constant between experimental conditions. However, the precise way in which overlap is defined, and its extent, may fundamentally dictate the extent of cross-linguistic activation observed. ...
Article
Activation of both of a bilingual’s languages during auditory word recognition has been widely documented. Here, we argue that if parallel activation in bilinguals is the result of a bottom-up process where phonetic features that overlap across two languages activate both linguistic systems, then the robustness of such parallel activation is in fact surprising. This is because phonemes across two different languages are rarely perfectly matched to each other in phonetic features. For instance, across Spanish and English, a “voiced” stop is realized in phonetically-distinct ways, and therefore, words that begin with voiced stops in English do not in fact fully overlap in phonetic features with words in Spanish. In two eye-tracking experiments using a visual world paradigm, we examined the effect of a phonemic match (English /b/ matched to Spanish /b/) vs. a phonetic match (English /b/ matched to Spanish /p/) on cross-linguistic co-activation (English words co-activating Spanish) in Spanish L1 and in Spanish L2 speakers. We found that while phonemic matching induced co-activation in both Spanish L1 and Spanish L2 speakers, phonetic matching did not. Together, these results indicate that co-activation of two languages in bilinguals may proceed through activation of categorical phonemic information rather than through activation of phonetic features.
Article
Full-text available
This study investigated (a) whether L2 semantic processing is modulated by automatic activation of L1 translations, (b) whether L1 translation activation involves both phonological and orthographic representations, and (c) whether these phonological and orthographic representations of L1 translations are accessed along a similar time course. To this end, 48 Hebrew–English bilinguals and 48 native English speakers with no Hebrew knowledge performed a semantic relatedness judgment task in English. Critical prime–target pairs (n = 96) were semantically unrelated, but their translations in Hebrew could include form overlap. Specifically, complete translation-overlap pairs shared both a phonological and an orthographic lexical form (e.g., “beak” and “source” = מקור /makor/), whereas partial translation-overlap pairs shared either a phonological form (e.g., “skin” and “light” = /or/) or an orthographic form (e.g., “book” and “barber” = ספר) in Hebrew. Stimulus onset asynchrony (SOA) of the prime–target L2-English words was further manipulated to reveal the time course of phonological and orthographic translation activation. Results showed that complete overlap in the translation led Hebrew–English bilinguals, but not native English speakers, to judge semantically unrelated pairs as related in meaning and to do so more quickly irrespective of SOA. For partial translation overlap in phonology, the percentage of “yes” responses was affected only in the short SOA (300 ms), and under partial translation overlap in orthography, only in the long SOA (750 ms). These findings suggest that L1 translation activation during L2 word processing spreads to both phonological and orthographic representations but at different time points along processing.
Chapter
The chapter considers how language sparks discovery and innovation by examining creativity and problem-solving through the unique vantage point of multilingualism. The chapter begins with an overview of how creativity and problem-solving are operationalized and measured, followed by a review of how multilingualism impacts the ability to innovate and solve problems. The relationship between multilingualism and creativity is modulated by proficiency and age of second language acquisition. Similarly, performance on problem-solving tasks depends on which language multilinguals use and on their proficiency level in each language. The final section discusses multilingualism, creativity, and problem-solving in real-world settings, as well as potential future directions, concluding with the suggestion that knowing multiple languages can lead to more creative outcomes and better problem-solving skills.
Chapter
Suppose we are good tennis players and want to learn to play ping-pong. Does the way we play tennis affect how we play ping-pong? Would we play ping-pong in the same way if we were not tennis experts? This was one of Albert’s recurring metaphors when drawing a line of thought toward language interactions in bilingual language processing. The argument behind the anecdote referred to what extent the sustained interaction between bilinguals’ two languages results in structural changes within the language network. This chapter aims to push the tennis metaphor one step further by asking whether playing tennis affects how we play football, a sport involving quite different skills. Bringing the sports metaphor into language, this chapter reviews interactions occurring between bilinguals’ two languages involving different articulatory and perceptual mechanisms, such as sign and oral languages. This chapter is then devoted to bimodal bilingualism, reviewing the most relevant results on cross-linguistic interactions across modalities.
Article
The present study used a masked implicit priming paradigm to test if L1 to L2 translation occurs automatically and rapidly. Korean-English bilinguals performed a lexical decision task when English L2 targets (e.g., FACE) were translation equivalent to the L1 prime (얼굴 elkwul meaning 'face') or had phonological overlap with its translation to varying degrees: moderate (FAKE), minimal (FOOL), or unrelated. The translation equivalent targets resulted in N250 and N400 attenuation, reflecting facilitation in sublexical and lexical mapping of the target words, respectively. Crucially, target words which were phonologically related to the implicitly activated translation equivalent (face-FAKE/FOOL) also demonstrated N250/N400 modulation in the absence of semantic overlap. Additionally, the pattern of effects obtained against the unrelated condition differed between the implicitly related primes, with greater phonological overlap resulting in increased negativity, while minimal overlap led to attenuation. These findings suggest translation via direct lexical association occurring automatically at earlier stages of visual word recognition prior to lexical selection in bilinguals.
Article
Full-text available
Language can have a powerful effect on how people experience events. Here, we examine how the languages people speak guide attention and influence what they remember from a visual scene. When hearing a word, listeners activate other similar-sounding words before settling on the correct target. We tested whether this linguistic coactivation during a visual search task changes memory for objects. Bilinguals and monolinguals remembered English competitor words that overlapped phonologically with a spoken English target better than control objects without name overlap. High Spanish proficiency also enhanced memory for Spanish competitors that overlapped across languages. We conclude that linguistic diversity partly accounts for differences in higher cognitive functions such as memory, with multilinguals providing a fertile ground for studying the interaction between language and cognition.
Chapter
Multilingualism affects cognitive, behavioral, and neural function across the lifespan. Here, we review the neuroimaging literature on bilingualism, multilingualism, and executive functions, focusing on three multilingual groups who rely on language control to varying degrees to overcome competition from other languages: third-language learners, multilingual adults, and simultaneous interpreters. In third-language learners, changes in brain regions underlying executive functions occur during the early stages of acquiring another language. In multilingual adults, effects of language experience reflect a qualitative difference between monolingual and multilingual processing rather than cumulative effects of increased linguistic knowledge. In simultaneous interpreters, changes in gray matter volume and white matter integrity are found in areas underlying language selection and executive functions, reflecting neural efficiency due to experience with rapid translation. The implications of these findings for our understanding of multilingualism and the value of moving beyond the monolingual–bilingual dichotomy are discussed.
Article
In bilingual word recognition, cross-language activation has been found in unimodal bilinguals (e.g., Chinese-English bilinguals) and bimodal bilinguals (e.g., American Sign language-English bilinguals). However, it remains unclear how signs' phonological parameters, spoken words' orthographic and phonological representation, and language proficiency affect cross-language activation in bimodal bilinguals. To resolve the issues, we recruited deaf Chinese sign language (CSL)-Chinese bimodal bilinguals as participants. We conducted two experiments with the implicit priming paradigm and the semantic relatedness decision task. Experiment 1 first showed cross-language activation from Chinese to CSL, and the CSL words' phonological parameter affected the cross-language activation. Experiment 2 further revealed inverse cross-language activation from CSL to Chinese. The Chinese words' orthographic and phonological representation played a similar role in the cross-language activation. Moreover, a comparison between Experiments 1 and 2 indicated that language proficiency influenced cross-language activation. The findings were further discussed with the Bilingual Interactive Activation Plus (BIA+) model, the deaf BIA+ model, and the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS) model.
Article
Many languages use the same letters to represent different sounds (e.g., the letter P represents /p/ in English but /r/ in Russian). We report two experiments that examine how native language experience impacts the acquisition and processing of words with conflicting letter-to-sound mappings. Experiment 1 revealed that individual differences in nonverbal intelligence predicted word learning and that novel words with conflicting orthography-to-phonology mappings were harder to learn when their spelling was more typical of the native language than less typical (due to increased competition from the native language). Notably, Experiment 2 used eye tracking to reveal, for the first time, that hearing non-native spoken words activates native language orthography and both native and non-native letter-to-sound mappings. These findings evince high interactivity in the language system, illustrate the role of orthography in phonological learning and processing, and demonstrate that experience with written form changes the linguistic mind.
Article
Full-text available
It is well established that access to the bilingual lexicon is non-selective: even in an entirely monolingual context, elements of the non-target language are active. Research has also shown that activation of the non-target language is greater at higher proficiency levels, suggesting that it may be proficiency that drives cross-language lexical activation. At the same time, the potential role of age of acquisition (AoA) in cross-language activation has gone largely unexplored, as most studies have either focused on adult L2 learners or have conflated AoA with L2 proficiency. The present study examines the roles of AoA and L2 proficiency in L2 lexical processing using the visual world paradigm. Participants were a group of early L1 Afrikaans–L2 English bilinguals (AoA 1–9 years) and a control group of L1 English speakers. Importantly, in the bilingual group, AoA and proficiency were not correlated. In the task, participants viewed a screen with four objects on it: a target object, a competitor object whose Afrikaans translation overlapped phonetically with the target object, and two unrelated distractor objects. The results show that the L2 English group was significantly more likely to look at the cross-language competitor than the L1 English group, thus providing evidence of cross-language activation. Importantly, the extent to which this activation occurred was modulated by both L2 proficiency and AoA. These findings suggest that while these two variables may have been confounded in previous research, they actually both exert effects on cross-language activation. The locus of this parallel activation effect is discussed in terms of connectionist models of bilingualism.
Article
Substantial research among bilingual adults indicates that exposure to words primes other semantically related words within and across languages, as well as the direct translation equivalents [e.g. Chen and Ng 1989. “Semantic Facilitation and Translation Priming Effects in Chinese-English Bilinguals.” Memory & Cognition 17: 454–462]. However, there is less research on semantic and translation priming among bilingual children. The purpose of this study was to evaluate semantic priming effects as an indicator of underlying lexical quality among Spanish-speaking dual language learners (DLLs) in the U.S., including examination of whether semantic and translation priming effects were related to children’s reading-related skills. Ninety-five Spanish-speaking DLLs in second and fourth grade completed an eye-tracking semantic priming task along with measures of English and Spanish reading-related skills. Results indicated that there were consistent translation priming effects, with observed translation priming stronger from English to Spanish than from Spanish to English. Additionally, there were consistent within-English semantic priming effects. Results suggested that semantic priming effects were stronger for children with higher levels of English vocabulary and reading comprehension than they were for children with lower levels of English vocabulary and reading comprehension. Findings are discussed in the context of theoretical models of bilingual language processing, as well as the lexical quality hypothesis [e.g. Perfetti 2007. “Reading Ability: Lexical Quality to Comprehension.” Scientific Studies of Reading 11: 357–383].
Article
In adult bilinguals, a word in one language will activate a related word in the other language, with language dominance modulating the direction of these effects. To determine whether the early bilingual lexicon possesses similar properties to its adult counterpart, two experiments compared translation equivalent priming and cross-linguistic semantic priming in 27-month-old bilingual toddlers learning English and one other language. Priming effects were found in both experiments, irrespective of language dominance and distance between the child’s two languages. The time course of target word recognition revealed a similar pattern for translation equivalent priming and cross-language semantic priming. These results suggest that the early bilingual lexicon possesses properties similar to the adult one in terms of word to concept connections. However, the absence of an advantage of translation equivalent priming over semantic priming, and the lack of dominance and language distance effects, suggest that when two languages are acquired in parallel during infancy, their integration within a single dynamic system is highly robust to input variations.
Article
Full-text available
Bilingual Lexical Ambiguity Resolution, edited by Roberto R. Heredia (Cambridge Core, Cognition).
Article
Bilinguals’ two languages are both active in parallel, and controlling co-activation is one of bilinguals’ principal challenges. Trilingualism multiplies this challenge. To investigate how third language (L3) learners manage interference between languages, Spanish-English bilinguals were taught an artificial language that conflicted with English and Spanish letter-sound mappings. Interference from existing languages was higher for L3 words that were similar to L1 or L2 words, but this interference decreased over time. After mastering the L3, learners continued to experience competition from their other languages. Notably, spoken L3 words activated orthography in all three languages, causing participants to experience cross-linguistic orthographic competition in the absence of phonological overlap. Results indicate that L3 learners are able to control between-language interference from the L1 and L2. We conclude that while the transition from two languages to three presents additional challenges, bilinguals are able to successfully manage competition between languages in this new context.
Article
Full-text available
This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is their dominant language and Spanish Sign Language (LSE) their non-dominant language, performed a monolingual semantic decision task with word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. The results showed that bimodal bilinguals were faster at judging semantically related words when the equivalent signed translations were phonologically related while they were slower judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE did not show any of these effects. The results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.
Article
Full-text available
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed.
Article
Full-text available
Using a variant of the visual world eye tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High- and low-proficiency Hindi-English bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings depicted a word phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency was significantly higher towards the target in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention and lead to interference in goal-directed action in the visual domain.
Article
Full-text available
We investigated whether speaking in one language affects cross- and within-language activation when subsequently switching to a task performed in the same or a different language. English–French bilinguals (L1 English, n = 29; L1 French, n = 28) were randomly assigned to a prior language context condition consisting of a spontaneous production task in English (the no-switch group) or in French (the switch group). Participants then performed an English spoken language comprehension task using the visual world method. The key result was that the switch group showed less evidence of cross-language competition than the no-switch group, consistent with the notion of an active inhibition of a prior language in the switch group. These data suggest that proficient bilinguals can globally suppress a non-target language, whether it is L1 or L2, though doing so requires cognitive resources that may be diverted from other demands, such as controlling within-language competition. Bilinguals occasionally experience thinking that the spoken words they hear belong to one known language when in fact they belong to another. Such confusions likely arise because bilinguals simultaneously activate multiple languages during spoken word recognition.
Article
Full-text available
Lexical access was examined in English–Spanish bilinguals by monitoring eye fixations on target and lexical competitors as participants followed spoken instructions in English to click on one of the objects presented on a computer (e.g., ‘Click on the beans’). Within-language lexical competitors had a phoneme onset in English that was shared with the target (e.g., ‘beetle’). Between-language lexical competitors had a phoneme onset in Spanish that was shared with the target (‘bigote’, ‘mustache’ in English). Participant groups varied in their age-of-acquisition of English and Spanish, and were examined in one of three language modes (Grosjean, 1998, 2001). A strong within-language (English) lexical competition (or cohort effect) was modulated by language mode and age of second language acquisition. A weaker between-language (Spanish) cohort effect was influenced primarily by the age-of-acquisition of Spanish. These results highlight the role of age-of-acquisition and mode in language processing. They are discussed in comparison to previous studies addressing the role of these two variables and in terms of existing models of bilingual word recognition.
Data
Full-text available
Timed picture naming was compared in seven languages that vary along dimensions known to affect lexical access. Analyses over items focused on factors that determine cross-language universals and cross-language disparities. With regard to universals, number of alternative names had large effects on reaction time within and across languages after target-name agreement was controlled, suggesting inhibitory effects from lexical competitors. For all the languages, word frequency and goodness of depiction had large effects, but objective picture complexity did not. Effects of word structure variables (length, syllable structure, compounding, and initial frication) varied markedly over languages. Strong cross-language correlations were found in naming latencies, frequency, and length. Other-language frequency effects were observed (e.g., Chinese frequencies predicting Spanish reaction times) even after within-language effects were controlled (e.g., Spanish frequencies predicting Spanish reaction times). These surprising cross-language correlations challenge widely held assumptions about the lexical locus of length and frequency effects, suggesting instead that they may (at least in part) reflect familiarity and accessibility at a conceptual level that is shared over languages.
Article
Full-text available
Recent studies have shown that word frequency estimates obtained from films and television subtitles are better to predict performance in word recognition experiments than the traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken languages. The subtitle frequencies are based on a corpus of 41M words taken from contemporary movies and TV series (screened between 1990 and 2009). In addition, the frequencies have been validated by correlating them with the RTs from two megastudies involving 2,764 words each (lexical decision and word naming tasks). The subtitle frequencies explained 6% more of the variance than the existing written frequencies in lexical decision, and 2% extra in word naming. Word frequency, together with age of acquisition, is considered to be the most important variable in word comprehension and production: Words encountered often in life are processed more efficiently than words rarely encountered. Any study involving the perception or the production of words, whether in healthy individuals or in clinical samples (aphasia, Alzheimer's dementia, dyslexia, etc.), has to consider this variable. Therefore, researchers require good dictionaries that allow them to select words according to their frequency. Any language without a good word frequency measure is seriously disadvantaged when it comes to psycholinguistic research.
Article
Full-text available
During spoken word-recognition, bilinguals have been shown to access their two languages simultaneously. The present study examined effects of language proficiency and lexical status on parallel language activation. Language proficiency was manipulated by testing German-native and English-native bilingual speakers of German and English. Lexical status was manipulated by presenting target words that either overlapped in form across translation equivalents (cognate words) or did not overlap in form across translation equivalents (English-specific words). Participants identified targets (such as hen) from picture-displays that also included similar-sounding German competitor words (such as Hemd, shirt). Eye-movements to German competitors were used to index co-activation of German. Results showed that both bilingual groups co-activated German while processing cognate targets; however, only German-native bilinguals co-activated German while processing English-specific targets. These findings indicate that high language proficiency and cognate status boost parallel language activation in bilinguals.
Article
Full-text available
Masked translation priming between languages with different scripts exhibits a marked asymmetry in lexical decision, with much stronger priming from L1 to L2 than from L2 to L1. This finding was confirmed in a lexical decision task with Chinese–English bilinguals who were late learners of English. Following a suggestion made by Bradley (1991), the experiment was repeated using a speeded episodic recognition task. Participants studied Chinese words, and then were tested in an old/new classification task in which Chinese target words were primed by masked English translation equivalents. Significant priming was obtained for old items, not for new items. However, no priming was obtained when lexical decision was used. Unexpectedly, the episodic task showed a reverse asymmetry, since L1–L2 priming was not obtained with this task, although strong effects were obtained for lexical decision. A possible explanation for this pattern of results is that knowledge of L2 lexical items is represented episodically for late learners.
Article
Full-text available
In this study, we investigated automatic translation from English to Chinese and subsequent morphological decomposition of translated Chinese compounds. In two lexical decision tasks, Chinese-English bilinguals responded to English target words that were preceded by masked unrelated primes presented for 59 ms. Unbeknownst to participants, the Chinese translations of the words in each critical pair consisted of a fully opaque compound word (i.e., a compound with two constituent morphemes that were semantically unrelated to the compound) and a monomorphemic word that was either the first or the second morpheme of the compound. The data revealed that bilinguals responded faster to English word pairs whose Chinese translations repeated the first morpheme than to English word pairs whose Chinese translations did not repeat the first morpheme, but no effect of hidden second-morpheme repetition was found. This effect of hidden first-morpheme repetition suggests that participants translated English words to Chinese and decomposed the translated compounds into their constituent morphemes. Because the primes were presented for only 59 ms, translation and morphological decomposition must be fast and automatic.
Article
Full-text available
How do the two languages of bilingual individuals interact in everyday communication? Numerous behavioral and event-related brain potential studies have suggested that information from the non-target language is spontaneously accessed when bilinguals read, listen, or speak in a given language. While this finding is consistent with predictions of current models of bilingual processing, most paradigms used so far have mixed the two languages by using language ambiguous stimuli (e.g., cognates or interlingual homographs) or explicitly engaging the two languages because of experimental task requirements (e.g., word translation or language selection). These paradigms will have yielded different language processing contexts, the effect of which has seldom been taken into consideration. We propose that future studies should test the effect of language context on cross-language interactions in a systematic way, by controlling and manipulating the extent to which the experiment implicitly or explicitly prompts activation of the two languages.
Article
Full-text available
Bilingual individuals have been shown to access their native language while reading in or listening to their other language. However, it is unknown what type of mental representation (e.g., sound or spelling) they retrieve. Here, using event-related brain potentials, we demonstrate unconscious access to the sound form of Chinese words when advanced Chinese-English bilinguals read or listen to English words. Participants were asked to decide whether or not English words presented in pairs were related in meaning; they were unaware of the fact that some of the unrelated word pairs concealed either a sound or a spelling repetition in their Chinese translations. Whereas spelling repetition in Chinese translations had no effect, concealed sound repetition significantly modulated event-related brain potentials. These results suggest that processing second language activates the sound, but not the spelling, of native language translations.
Article
Full-text available
Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16-30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEX(US) corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.
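The distinction the abstract draws between raw frequency of occurrence and contextual diversity can be made concrete with a short sketch. This is a minimal illustration over a hypothetical three-context toy corpus, not the SUBTLEX procedure itself (the real norms are computed over tens of millions of subtitle words, with one film or episode counting as one context):

```python
from collections import Counter

# Toy "subtitle corpus": each string stands in for one film/episode
# (one context). Hypothetical data, for illustration only.
contexts = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a quiet night at sea",
]

tokens_per_context = [c.split() for c in contexts]

# Raw word-form frequency: total occurrences across the whole corpus.
frequency = Counter(t for toks in tokens_per_context for t in toks)

# Contextual diversity: the number of distinct contexts (films/episodes)
# a word appears in, regardless of how often it repeats within one.
diversity = Counter(t for toks in tokens_per_context for t in set(toks))

print(frequency["the"])  # 4 occurrences in total
print(diversity["the"])  # but only 2 of the 3 contexts
```

A word repeated many times in a single film gets a high raw frequency but a low diversity count; per the abstract, the diversity-style measure is the better predictor of word processing times.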
Article
Full-text available
The present study investigated cross-language priming effects with unique noncognate translation pairs. Unbalanced Dutch (first language [L1])-English (second language [L2]) bilinguals performed a lexical decision task in a masked priming paradigm. The results of two experiments showed significant translation priming from L1 to L2 (meisje-girl) and from L2 to L1 (girl-meisje), using two different stimulus onset asynchronies (SOAs) (250 and 100 msec). Although translation priming from L1 to L2 was significantly stronger than priming from L2 to L1, the latter was significant as well. Two further experiments with the same word targets showed significant cross-language semantic priming in both directions (jongen [boy]-girl; boy-meisje [girl]) and for both SOAs. These data suggest that L1 and L2 are represented by means of a similar lexico-semantic architecture in which L2 words are also able to rapidly activate semantic information, although to a lesser extent than L1 words are able to. This is consistent with models assuming quantitative rather than qualitative differences between L1 and L2 representations.
Article
Full-text available
Measuring event-related potentials (ERPs) has been fundamental to our understanding of how language is encoded in the brain. One particular ERP response, the N400 response, has been especially influential as an index of lexical and semantic processing. However, there remains a lack of consensus on the interpretation of this component. Resolving this issue has important consequences for neural models of language comprehension. Here we show that evidence bearing on where the N400 response is generated provides key insights into what it reflects. A neuroanatomical model of semantic processing is used as a guide to interpret the pattern of activated regions in functional MRI, magnetoencephalography and intracranial recordings that are associated with contextual semantic manipulations that lead to N400 effects.
Article
Full-text available
Hebrew-English cognates (translations similar in meaning and form) and noncognates (translations similar in meaning only) were examined in masked translation priming. Enhanced priming for cognates was found with L1 (dominant language) primes, but unlike previous results, it was not found with L2 (nondominant language) primes. Priming was also obtained for noncognates, whereas previous studies showed unstable effects for such stimuli. The authors interpret the results in a dual-lexicon model by suggesting that (a) both orthographic and phonological overlap are needed to establish shared lexical entries for cognates (and hence also symmetric cognate priming), and (b) script differences facilitate rapid access by providing a cue to the lexical processor that directs access to the proper lexicon, thus producing stable noncognate priming. The asymmetrical cognate effect obtained with different scripts may be attributed to an overreliance on phonology in L2 reading.
Article
Full-text available
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), despite that the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.
Article
Full-text available
Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial “dual-language” context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese–English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware of the fact that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension.
Article
Full-text available
To develop a reliable and valid questionnaire of bilingual language status with predictable relationships between self-reported and behavioral measures. In Study 1, the internal validity of the Language Experience and Proficiency Questionnaire (LEAP-Q) was established on the basis of self-reported data from 52 multilingual adult participants. In Study 2, criterion-based validity was established on the basis of standardized language tests and self-reported measures from 50 adult Spanish-English bilinguals. Reliability and validity of the questionnaire were established on healthy adults whose literacy levels were equivalent to that of someone with a high school education or higher. Factor analyses revealed consistent factors across both studies and suggested that the LEAP-Q was internally valid. Multiple regression and correlation analyses established criterion-based validity and suggested that self-reports were reliable indicators of language performance. Self-reported reading proficiency was a more accurate predictor of first-language performance, and self-reported speaking proficiency was a more accurate predictor of second-language performance. Although global measures of self-reported proficiency were generally predictive of language ability, deriving a precise estimate of performance on a particular task required that specific aspects of language history be taken into account. The LEAP-Q is a valid, reliable, and efficient tool for assessing the language profiles of multilingual, neurologically intact adult populations in research settings.
Article
Full-text available
The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have opposite effects on word processing. Experiment 2 confirmed these results: Word processing was slower for dense near neighborhoods and faster for dense distant neighborhoods. Analysis of a computational model showed that attractor dynamics can produce this pattern of neighborhood effects. The authors argue for reconsideration of traditional models of neighborhood effects in terms of attractor dynamics, which allow both inhibitory and facilitative effects to emerge.
Article
Most models of lexical access assume that bilingual speakers activate their two languages even when they are in a context in which only one language is used. A critical piece of evidence used to support this notion is the observation that a given word automatically activates its translation equivalent in the other language. Here, we argue that these findings are compatible with a different account, in which bilinguals “carry over” the structure of their native language to the non-native language during learning, and where there is no activation of translation equivalents. To demonstrate this, we describe a model in which language learning involves mapping native language phonological relationships to the non-native language, and we show how it can explain the results attributed to automatic activation of translation equivalents.
Book
Growth Curve Analysis and Visualization Using R provides a practical, easy-to-understand guide to carrying out multilevel regression/growth curve analysis (GCA) of time course or longitudinal data in the behavioral sciences, particularly cognitive science, cognitive neuroscience, and psychology. With a minimum of statistical theory and technical jargon, the author focuses on the concrete issue of applying GCA to behavioral science data and individual differences. http://www.crcpress.com/product/isbn/9781466584327 http://www.danmirman.org/gca
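The book's worked examples use R (multilevel regression via lme4); as a rough illustration of the core idea only, the sketch below models a hypothetical fixation time course with polynomial time terms in Python. A full growth curve analysis would additionally include participant-level random effects in a mixed model; the plain least-squares fit here stands in for the fixed-effects part:

```python
import numpy as np

# Hypothetical fixation time course: proportion of looks to a target
# across 10 consecutive time bins (made-up numbers for illustration).
time = np.arange(10, dtype=float)
props = np.array([0.10, 0.12, 0.18, 0.30, 0.45,
                  0.58, 0.66, 0.70, 0.72, 0.73])

# GCA captures the shape of the curve with polynomial time terms;
# here, a second-order fit gives intercept, linear, and quadratic terms.
coefs = np.polyfit(time, props, deg=2)   # [quadratic, linear, intercept]
fitted = np.polyval(coefs, time)

# Condition effects in GCA are tested on these term estimates
# (e.g., does the linear slope differ between competitor conditions?).
print(coefs)
```

In the behavioral-science applications the book targets, the polynomial terms are usually orthogonalized so that the intercept and slope estimates can be interpreted independently.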
Article
Performance of bilingual Russian–English speakers and monolingual English speakers during auditory processing of competing lexical items was examined using eye tracking. Results revealed that both bilinguals and monolinguals experienced competition from English lexical items overlapping phonetically with an English target item (e.g., spear and speaker). However, only bilingual speakers experienced competition from Russian competitor items overlapping crosslinguistically with an English target (e.g., spear and spichki, Russian for matches). English monolinguals treated the Russian competitors as they did any other filler items. This difference in performance between bilinguals and monolinguals tested with exactly the same sets of stimuli suggests that eye movements to a crosslinguistic competitor are due to activation of the other language and to between-language competition rather than being an artifact of stimulus selection or experimental design.
Article
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension.
Article
Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing.
Article
Three experiments are reported in which picture naming and bilingual translation were performed in the context of semantically categorized or randomized lists. In Experiments 1 and 3 picture naming and bilingual translation were slower in the categorized than randomized conditions. In Experiment 2 this category interference effect in picture naming was eliminated when picture naming alternated with word naming. Taken together, the results of the three experiments suggest that in both picture naming and bilingual translation a conceptual representation of the word or picture is used to retrieve a lexical entry in one of the speaker's languages. When conceptual activity is sufficiently great to activate a multiple set of corresponding lexical representations, interference is produced in the process of retrieving a single best lexical candidate as the name or translation. The results of Experiment 3 showed further that category interference in bilingual translation occurred only when translation was performed from the first language to the second language, suggesting that the two directions of translation engage different interlanguage connections. A model to account for the asymmetric mappings of words to concepts in bilingual memory is described.
Article
Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, 'lid', given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
Article
Two eye-tracking experiments examined spoken language processing in Russian-English bilinguals. The proportion of looks to objects whose names were phonologically similar to the name of a target object in either the same language (within-language competition), the other language (between-language competition), or both languages at the same time (simultaneous competition) was compared to the proportion of looks in a control condition in which no objects overlapped phonologically with the target. Results support previous findings of parallel activation of lexical items within and between languages, but suggest that the magnitude of the between-language competition effect may vary across first and second languages and may be mediated by a number of factors such as stimuli, language background, and language mode.
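The dependent measure these visual-world studies compare across conditions is the proportion of looks to a competitor versus a phonologically unrelated control. A minimal sketch of that computation, over hypothetical fixation samples labeled by region of interest:

```python
def proportion_of_looks(fixation_samples, roi):
    """Share of fixation samples that landed on the given region of interest."""
    if not fixation_samples:
        return 0.0
    return sum(1 for s in fixation_samples if s == roi) / len(fixation_samples)

# Hypothetical trial: 10 gaze samples, 3 of which fall on the
# between-language competitor (e.g., the "spichki" picture).
trial = ["target"] * 6 + ["competitor"] * 3 + ["distractor"]

print(proportion_of_looks(trial, "competitor"))  # 0.3
print(proportion_of_looks(trial, "target"))      # 0.6
```

In the actual analyses, these proportions are aggregated over trials and time bins per condition, and the competitor condition is tested against the no-overlap control.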
Article
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals' and English monolinguals' eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at the competing item than at phonologically unrelated items, and looked more at competing items than monolinguals did, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension.
Article
The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
Article
A well-known asymmetry exists in the bilingual masked priming literature in which lexical decision is used: namely, masked primes in the dominant language (L1) facilitate decision times on targets in the less dominant language (L2), but not vice versa. In semantic categorization, on the other hand, priming is symmetrical. In Experiments 1–3 we confirm this task difference, finding robust masked L2–L1 translation priming in semantic categorization but not lexical decision. In formulating an account for these findings, we begin with the assumption of a representational asymmetry between L1 and L2 lexical semantic representations, such that L1 representations are richly populated and L2 representations are not. According to this representational account, L2–L1 priming does not occur in lexical decision because an insufficient proportion of the L1 lexical semantic representation is activated by the L2 prime. In semantic categorization, we argue that the semantic information recruited to generate a decision is restricted by the task category, and that this restriction enhances the effectiveness of the L2 prime. In Experiments 4–6, these assumptions were tested in a within-language setting by pairing many-sense words (e.g., “head”) with few-sense words (e.g., “skull”). In lexical decision, robust priming was obtained in the many-to-few direction (analogous to L1–L2), but no priming was obtained in the few-to-many direction (analogous to L2–L1) using the same word pairs. Priming in semantic categorization, on the other hand, was obtained in both directions. We propose the Sense Model as a possible account of these findings.
Article
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set: precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and two partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.
Article
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.
Article
The time course of cross-script translation priming and repetition priming was examined in two different scripts using a combination of the masked priming paradigm with the recording of event-related potentials (ERPs). Japanese-English bilinguals performed a semantic categorization task in their second language (L2) English and in their first language (L1) Japanese. Targets were preceded by a visually presented related (translation equivalent/repeated) or unrelated prime. The results showed that the amplitudes of the N250 and N400 ERP components were significantly modulated for L2-L2 repetition priming, L1-L2 translation priming, and L1-L1 repetition priming, but not for L2-L1 translation priming. There was also evidence for priming effects in an earlier 100-200 ms time window for L1-L1 repetition priming and L1-L2 translation priming. We argue that a change in script across primes and targets provides optimal conditions for prime word processing, hence generating very fast-acting translation priming effects when primes are in L1.
Article
The present study examines language effects in second language learners. In three experiments participants monitored a stream of words for occasional probes from one semantic category and ERPs were recorded to non-probe critical items. In Experiment 1 L1 English participants who were university learners of French saw two lists of words blocked by language, one in French and one in English. We observed a large effect of language that mostly affected amplitudes of the N400 component, but starting as early as 150 ms post-stimulus onset. A similar pattern was found in Experiment 2 with L1 French and L2 English, showing that the effect is due to language dominance and not language per se. Experiment 3 found that proficient French/English bilinguals exhibited a different pattern of language effects showing that these effects are modulated by proficiency. These results lend further support to the hypothesis that word recognition during the early phases of L2 acquisition in late learners of L2 involves a specific set of mechanisms compared with recognition of L1 words.
Article
The physical energy that we refer to as a word, whether in isolation or embedded in sentences, takes its meaning from the knowledge stored in our brains through a lifetime of experience. Much empirical evidence indicates that, although this knowledge can be used fairly flexibly, it is functionally organized in 'semantic memory' along a number of dimensions, including similarity and association. Here, we review recent findings using an electrophysiological brain component, the N400, that reveal the nature and timing of semantic memory use during language comprehension. These findings show that the organization of semantic memory has an inherent impact on sentence processing. The left hemisphere, in particular, seems to capitalize on the organization of semantic memory to pre-activate the meaning of forthcoming words, even if this strategy fails at times. In addition, these electrophysiological results support a view of memory in which world knowledge is distributed across multiple, plastic-yet-structured, largely modality-specific processing areas, and in which meaning is an emergent, temporally extended process, influenced by experience, context, and the nature of the brain itself.
Article
Spoken word recognition is characterized by multiple activation of sound patterns that are consistent with the acoustic-phonetic input. Recently, an extreme form of multiple activation was observed: Bilingual listeners activated words from both languages that were consistent with the input. We explored the degree to which bilingual multiple activation may be constrained by fine-grained acoustic-phonetic information. In a head-mounted eyetracking experiment, we presented Spanish-English bilinguals with spoken Spanish words having word-initial stop consonants with either English- or Spanish-appropriate voice onset times. Participants fixated interlingual distractors (nontarget pictures whose English names shared a phonological similarity with the Spanish targets) more frequently than control distractors when the target words contained English-appropriate voice onset times. These results demonstrate that fine-grained acoustic-phonetic information and a precise match between input and representation are critical for parallel activation of two languages.
Article
Much research in bilingualism has addressed the question of the extent to which lexical information is shared between languages. The present study investigated whether syntactic information is shared by testing if syntactic priming occurs between languages. Spanish-English bilingual participants described cards to each other in a dialogue game. We found that a participant who had just heard a sentence in Spanish tended to use the same type of sentence when describing the next card in English. In particular, English passives were considerably more common following a Spanish passive than otherwise. We use the results to extend current models of the representation of grammatical information to bilinguals.
Article
This article describes a Windows program that enables users to obtain a broad range of statistics concerning the properties of word and nonword stimuli, including measures of word frequency, orthographic similarity, orthographic and phonological structure, age of acquisition, and imageability. It is designed for use by researchers in psycholinguistics, particularly those concerned with recognition of isolated words. The program computes measures of orthographic similarity online, either with respect to a default vocabulary of 30,605 words or to a vocabulary specified by the user. In addition to providing standard orthographic neighborhood measures, the program can be used to obtain information about other forms of orthographic similarity, such as transposed-letter similarity and embedded-word similarity. It is available, free of charge, from the following Web site: http://www.maccs.mq.edu.au/colin/N-Watch/.
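The standard orthographic neighborhood measure such tools report (Coltheart's N) can be illustrated with a minimal sketch: count the words in a reference vocabulary that have the same length as the target and differ from it by exactly one substituted letter. This is an illustrative implementation, not N-Watch's own code, and the toy vocabulary below is hypothetical.

```python
def orthographic_neighbors(word, vocabulary):
    """Coltheart's N neighbors: same-length words that differ from
    `word` by exactly one substituted letter (no insertions or deletions)."""
    return [
        w for w in vocabulary
        if len(w) == len(word)
        and sum(a != b for a, b in zip(w, word)) == 1
    ]

# Toy vocabulary (hypothetical); N-Watch uses a 30,605-word default lexicon.
vocab = {"cat", "cot", "cut", "bat", "car", "cast", "dog"}
print(sorted(orthographic_neighbors("cat", vocab)))  # → ['bat', 'car', 'cot', 'cut']
```

Transposed-letter and embedded-word similarity would require separate comparison rules (letter swaps and substring matches, respectively) rather than the single-substitution check used here.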
Article
This article describes a Windows program that enables users to obtain a broad range of statistics concerning the properties of word and nonword stimuli in Spanish, including word frequency, syllable frequency, bigram and biphone frequency, orthographic similarity, orthographic and phonological structure, concreteness, familiarity, imageability, valence, arousal, and age-of-acquisition measures. It is designed for use by researchers in psycholinguistics, particularly those concerned with recognition of isolated words. The program computes measures of orthographic similarity online, with respect to either a default vocabulary of 31,491 Spanish words or a vocabulary specified by the user. In addition to providing standard orthographic and phonological neighborhood measures, the program can be used to obtain information about other forms of orthographic similarity, such as transposed-letter similarity and embedded-word similarity. It is available, free of charge, from the following Web site: www.maccs.mq.edu.au/-colin/B-Pal.
Article
The influence of phonological similarity on bilingual language processing was examined within and across languages in three experiments. Phonological similarity was manipulated within a language by varying neighborhood density, and across languages by varying extent of cross-linguistic overlap between native and non-native languages. In Experiment 1, speed and accuracy of bilinguals' picture naming were susceptible to phonological neighborhood density in both the first and the second language. In Experiment 2, eye-movement patterns indicated that the time course of language activation varied across phonological neighborhood densities and across native/non-native language status. In Experiment 3, speed and accuracy of bilingual performance in an auditory lexical decision task were influenced by degree of cross-linguistic phonological overlap. Together, the three experiments confirm that bilinguals are sensitive to phonological similarity within and across languages and suggest that this sensitivity is asymmetrical across native and non-native languages and varies along the time course of word processing.
Cross-language interaction in unimodal and bimodal bilinguals. Paper presented at the 16th Conference of the European Society for Cognitive Psychology
  • J G Van Hell
  • E Ormel
  • J Van Der Loop
  • D Hermans
Van Hell, J. G., Ormel, E., van der Loop, J., & Hermans, D. (2009). Cross-language interaction in unimodal and bimodal bilinguals. Paper presented at the 16th Conference of the European Society for Cognitive Psychology. Cracow, Poland, September 2-5.
Does lexical activation flow from word meanings to word sounds during spoken word recognition? Poster presented at the 48th annual meeting of the
  • E Yee
  • S Thompson-Schill
Yee, E., & Thompson-Schill, S. (2007). Does lexical activation flow from word meanings to word sounds during spoken word recognition? Poster presented at the 48th annual meeting of the Psychonomic Society. Long Beach, CA. https://doi.org/10.1037/e527342012-716
Means (Standard Deviations) for Target, Competitor, and Distractor items. No differences were found between conditions (ps > 0.1).
Is syntax separate or shared between languages? Cross-linguistic syntactic priming in Spanish-English bilinguals
  • Hartsuiker
  • Yee