Article

Covert Co-Activation of Bilinguals' Non-Target Language: Phonological Competition from Translations


Abstract

When listening to spoken language, bilinguals access words in both of their languages at the same time; this co-activation is often driven by phonological input mapping to candidates in multiple languages during online comprehension. Here, we examined whether cross-linguistic activation could occur covertly when the input does not overtly cue words in the non-target language. When asked in English to click an image of a duck, English-Spanish bilinguals looked more to an image of a shovel than to unrelated distractors, because the Spanish translations of the words duck and shovel (pato and pala, respectively) overlap phonologically in the non-target language. Our results suggest that bilinguals access their unused language, even in the absence of phonologically overlapping input. We conclude that during bilingual speech comprehension, words presented in a single language activate translation equivalents, with further spreading activation to unheard phonological competitors. These findings support highly interactive theories of language processing.
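The mechanism the abstract proposes (the heard word activates its translation equivalent, which in turn activates a phonological neighbor in the non-target language, which activates its own translation) can be sketched as a toy spreading-activation network. This is a minimal illustration only: the node set, link weights, and decay parameter below are assumptions chosen for the example, not values estimated in the study.

```python
# Toy spreading-activation sketch of the covert co-activation chain
# duck -> pato (translation) -> pala (phonological neighbor) -> shovel (translation).
# All weights and the decay constant are illustrative assumptions.
LINKS = {
    "duck": [("pato", 0.9)],    # translation equivalent (English -> Spanish)
    "pato": [("pala", 0.6)],    # phonological overlap within Spanish
    "pala": [("shovel", 0.9)],  # translation equivalent (Spanish -> English)
}

def spread(source, decay=0.5):
    """Propagate activation outward from the heard word, attenuating at each link."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor, weight in LINKS.get(node, []):
            incoming = activation[node] * weight * decay
            if incoming > activation.get(neighbor, 0.0):
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

act = spread("duck")
# Activation decays along the chain: duck > pato > pala > shovel
print(act)
```

In this sketch, "shovel" ends up with a small but nonzero activation even though it shares no sounds with the input "duck", mirroring the covert competition effect the study reports.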


... Theoretical models of language coactivation are supported by empirical evidence from unimodal tasks with bilinguals in the auditory (Fitzpatrick & Indefrey, 2010; Weber & Cutler, 2006), visual (Chabal & Marian, 2015; Finkbeiner et al., 2004; Martín et al., 2010; Schoonbaert et al., 2009; Sunderman & Kroll, 2006; Thierry & Wu, 2007), and audio-visual modalities (Blumenfeld & Marian, 2013; Giezen et al., 2015; Ju & Luce, 2004; Marian & Spivey, 2003a, 2003b; Shook & Marian, 2019), suggesting that phonological overlap between words across languages leads to parallel activation (e.g., Marian & Spivey, 2003a, 2003b; Shook & Marian, 2013). ...
... p < 0.01). In addition, previous research has demonstrated that cross-linguistic competition occurs early, within the first 600 ms post-target onset (Blumenfeld & Marian, 2013; Shook & Marian, 2019). We thus used a narrower time window (300–600 ms post-sound onset) for follow-up analyses to confirm initial results based on visual inspection of the data. ...
... Visual inspection of the time course data suggested that monolinguals produced more looks to competitors and fillers overall early on. To further uncover whether this trend held for monolinguals and bilinguals in a narrower time window when cross-linguistic effects are typically present (Blumenfeld & Marian, 2013; Shook & Marian, 2019), we selected 300–600 ms post-sound onset. These follow-up analyses revealed no main effects or interactions within the narrower time window. ...
Article
A bilingual’s language system is highly interactive. When hearing a second language (L2), bilinguals access native-language (L1) words that share sounds across languages. In the present study, we examine whether input modality and L2 proficiency moderate the extent to which bilinguals activate L1 phonotactic constraints (i.e., rules for combining speech sounds) during L2 processing. Eye movements of English monolinguals and Spanish–English bilinguals were tracked as they searched for a target English word in a visual display. On critical trials, displays included a target that conflicted with the Spanish vowel-onset rule (e.g., spa), as well as a competitor containing the potentially activated “e” onset (e.g., egg). The rule violation was processed either in the visual modality (Experiment 1) or audio-visually (Experiment 2). In both experiments, bilinguals with lower L2 proficiency made more eye movements to competitors than fillers. Findings suggest that bilinguals who have lower L2 proficiency access L1 phonotactic constraints during L2 visual word processing with and without auditory input of the constraint-conflicting structure (e.g., spa). We conclude that the interactivity between a bilingual’s two languages is not limited to words that share form across languages, but also extends to sublexical, rule-based structures.
... However, there is also evidence of coactivation even when there is no phonological overlap between the spoken word and the competitor in the nontarget language. One example comes from Shook and Marian (2017), who found evidence of "covert activation" using the eye-tracking and visual world paradigm. When English-Spanish bilinguals were asked to click on a picture of a duck in English, they were more likely to fixate on a picture of a shovel than on other competitors in the display. ...
... It may be more surprising, however, to find out that a lower proficiency language can influence how we process a higher proficiency language. Indeed, several studies utilizing different methodologies, including eye-tracking, have shown that a less proficient nontarget language can be activated in a monolingual dominant-language context (e.g., Lagrou et al., 2013;Lemhöfer et al., 2018;Shook & Marian, 2017). In other words, bilinguals can activate both dominant and nondominant languages even when they are not in use. ...
Chapter
Full-text available
Bilingual Lexical Ambiguity Resolution - edited by Roberto R. Heredia January 2020
... Within the bilingual system, connections are established through the simultaneous activations of related words (Kroll et al., 2010). As a result, starting from a young age, bilinguals often find themselves considering both within- and between-language competitors when selecting a word (Arredondo et al., 2019; Shook & Marian, 2019). Our new bilingual findings demonstrate that these bilingual connections extend to sub-lexical components, namely lexical morphology. ...
Article
How do early bilingual experiences influence children's neural architecture for word processing? Dual language acquisition can yield common influences that may be shared across different bilingual groups, as well as language‐specific influences stemming from a given language pairing. To investigate these effects, we examined bilingual English speakers of Chinese or Spanish, and English monolinguals, all raised in the US (N = 152, ages 5–10). Children completed an English morphological word processing task during fNIRS neuroimaging. The findings revealed both language‐specific and shared bilingual effects. The language‐specific effects were that Chinese and Spanish bilinguals showed principled differences in their neural organization for English lexical morphology. The common bilingual effects shared by the two groups were that in both bilingual groups, increased home language proficiency was associated with stronger left superior temporal gyrus (STG) activation when processing the English word structures that are most dissimilar from the home language. The findings inform theories of language and brain development during the key periods of neural reorganization for learning to read by illuminating experience‐based plasticity in linguistically diverse learners.
... However, electrophysiological results suggest that L1-dominant bilinguals can implement different neurocognitive mechanisms in the use of L1 and L2 phonological information when learning novel words in an L3. The present findings are in line with previous studies demonstrating that bilinguals seem to access and activate their unused language during speech comprehension (FitzPatrick and Indefrey, 2014; Shook and Marian, 2019). Our results suggest that L1-dominant bilinguals can make use of both of their languages to learn L3 novel words, even when one of them is not explicitly present in the learning situation. ...
Article
This study investigated the influence of phonological word representations from both first language (L1) and second language (L2) on third language (L3) lexical learning in L1-dominant Spanish–English bilinguals. More specifically, we used event-related potentials (ERPs) to determine whether L1 Spanish and L2 English phonology modulates bilinguals’ brain response to newly learned L3 Slovak words, some of which had substantial phonological overlap with either L1 or L2 words (interlingual homophones) in comparison to matched control words with little or no phonological overlap. ERPs were recorded from a group of 20 Spanish–English bilinguals in response to 120 auditory Slovak words, both before and after a three-day-long learning period during which they associated the L3 Slovak novel words with their L1 Spanish translations. Behaviorally, both L1 Spanish and L2 English homophony facilitated the learning of L3 Slovak words in a similar manner. In contrast, the electrophysiological results of the post-training ERPs, but not the pre-training ERPs, showed an N100 effect for L2 English interlingual homophones and opposite N400 effects for L1 Spanish and L2 English interlingual homophones in comparison to control words. These findings suggest different neurocognitive mechanisms in the use of L1 and L2 phonological information when learning novel words in an L3.
... Although we used the dictation task instead of translation, and a blocked design instead of intermixing languages across trials, to avoid direct activation of the non-intended language, it could still be argued that using both languages in the same experimental session could have enhanced language co-activation. However, recent studies have shown that language co-activation occurs even under very stringent single-language contexts, and even when language use is limited to the dominant language (Shook & Marian, 2019; Bobb, Von Holzen, Mayor, Mani & Carreiras, 2020). In the following subsections, we will discuss the evidence regarding language coactivation, the time course of lexical and sublexical activation, and, finally, some issues regarding language differences. ...
Article
Bilinguals’ two languages seem to be coactivated in parallel during reading, speaking, and listening. However, this coactivation in writing has been scarcely studied. This study aimed to assess orthographic coactivation during spelling-to-dictation. We took advantage of the presence of polyvalent graphemes in Spanish (one phonological representation with two orthographic specifications, e.g., /b/ for both the graphemes v and b) to manipulate orthographic congruency. Spanish–English bilinguals were presented with cross-linguistic congruent (movement–movimiento) and incongruent words (government–gobierno) for a dictation task. The time and accuracy to initiate writing and to type the rest of the word (lexical and sublexical processing) were recorded in both the native language (L1) and the second language (L2). Results revealed no differences between conditions in monolinguals. Bilinguals showed a congruency-by-language interaction with better performance for congruent stimuli, which was evident from the beginning of typing in L2. Language coactivation and lexical–sublexical interaction during bilinguals’ writing are discussed.
... Interestingly, although the linguistic level of overlap in stimuli (phonetic vs. phonemic) meant to probe cross-linguistic coactivation is important to both theory-building and experimental design, it has not received much attention in the empirical literature. Most studies report the number of overlapping phonemes (Canseco-Gonzalez et al., 2010; Ju & Luce, 2004; Marian & Spivey, 2003a, 2003b; Shook & Marian, 2017), and/or phonetic features (Canseco-Gonzalez et al., 2010; Marian & Spivey, 2003b), to quantify the overlap and to keep the amount of overlap constant between experimental conditions. However, the precise way in which overlap is defined, and its extent, may fundamentally dictate the extent of cross-linguistic activation observed. ...
Article
Activation of both of a bilingual’s languages during auditory word recognition has been widely documented. Here, we argue that if parallel activation in bilinguals is the result of a bottom-up process where phonetic features that overlap across two languages activate both linguistic systems, then the robustness of such parallel activation is in fact surprising. This is because phonemes across two different languages are rarely perfectly matched to each other in phonetic features. For instance, across Spanish and English, a “voiced” stop is realized in phonetically-distinct ways, and therefore, words that begin with voiced stops in English do not in fact fully overlap in phonetic features with words in Spanish. In two eye-tracking experiments using a visual world paradigm, we examined the effect of a phonemic match (English /b/ matched to Spanish /b/) vs. a phonetic match (English /b/ matched to Spanish /p/) on cross-linguistic co-activation (English words co-activating Spanish) in Spanish L1 and in Spanish L2 speakers. We found that while phonemic matching induced co-activation in both Spanish L1 and Spanish L2 speakers, phonetic matching did not. Together, these results indicate that co-activation of two languages in bilinguals may proceed through activation of categorical phonemic information rather than through activation of phonetic features.
Article
Many languages use the same letters to represent different sounds (e.g., the letter P represents /p/ in English but /r/ in Russian). We report two experiments that examine how native language experience impacts the acquisition and processing of words with conflicting letter-to-sound mappings. Experiment 1 revealed that individual differences in nonverbal intelligence predicted word learning and that novel words with conflicting orthography-to-phonology mappings were harder to learn when their spelling was more typical of the native language than less typical (due to increased competition from the native language). Notably, Experiment 2 used eye tracking to reveal, for the first time, that hearing non-native spoken words activates native language orthography and both native and non-native letter-to-sound mappings. These findings evince high interactivity in the language system, illustrate the role of orthography in phonological learning and processing, and demonstrate that experience with written form changes the linguistic mind.
Article
Full-text available
It is well established that access to the bilingual lexicon is non-selective: even in an entirely monolingual context, elements of the non-target language are active. Research has also shown that activation of the non-target language is greater at higher proficiency levels, suggesting that it may be proficiency that drives cross-language lexical activation. At the same time, the potential role of age of acquisition (AoA) in cross-language activation has gone largely unexplored, as most studies have either focused on adult L2 learners or have conflated AoA with L2 proficiency. The present study examines the roles of AoA and L2 proficiency in L2 lexical processing using the visual world paradigm. Participants were a group of early L1 Afrikaans–L2 English bilinguals (AoA 1–9 years) and a control group of L1 English speakers. Importantly, in the bilingual group, AoA and proficiency were not correlated. In the task, participants viewed a screen with four objects on it: a target object, a competitor object whose Afrikaans translation overlapped phonetically with the target object, and two unrelated distractor objects. The results show that the L2 English group was significantly more likely to look at the cross-language competitor than the L1 English group, thus providing evidence of cross-language activation. Importantly, the extent to which this activation occurred was modulated by both L2 proficiency and AoA. These findings suggest that while these two variables may have been confounded in previous research, they actually both exert effects on cross-language activation. The locus of this parallel activation effect is discussed in terms of connectionist models of bilingualism.
Article
Substantial research among bilingual adults indicates that exposure to words primes other semantically related words within and across languages, as well as the direct translation equivalents [e.g. Chen, H.-C., and M.-L. Ng. 1989. “Semantic Facilitation and Translation Priming Effects in Chinese-English Bilinguals.” Memory & Cognition 17: 454–462]. However, there is less research on semantic and translation priming among bilingual children. The purpose of this study was to evaluate semantic priming effects as an indicator of underlying lexical quality among Spanish-speaking dual language learners (DLLs) in the U.S., including examination of whether semantic and translation priming effects were related to children’s reading-related skills. Ninety-five Spanish-speaking DLLs in second and fourth grade completed an eye-tracking semantic priming task along with measures of English and Spanish reading-related skills. Results indicated that there were consistent translation priming effects, with observed translation priming stronger from English to Spanish than from Spanish to English. Additionally, there were consistent within-English semantic priming effects. Results suggested that semantic priming effects were stronger for children with higher levels of English vocabulary and reading comprehension than they were for children with lower levels of English vocabulary and reading comprehension. Findings are discussed in the context of theoretical models of bilingual language processing, as well as the lexical quality hypothesis [e.g. Perfetti, C. 2007. “Reading Ability: Lexical Quality to Comprehension.” Scientific Studies of Reading 11: 357–383].
Article
In adult bilinguals, a word in one language will activate a related word in the other language, with language dominance modulating the direction of these effects. To determine whether the early bilingual lexicon possesses similar properties to its adult counterpart, two experiments compared translation equivalent priming and cross-linguistic semantic priming in 27-month-old bilingual toddlers learning English and one other language. Priming effects were found in both experiments, irrespective of language dominance and distance between the child’s two languages. The time course of target word recognition revealed a similar pattern for translation equivalent priming and cross-language semantic priming. These results suggest that the early bilingual lexicon possesses properties similar to the adult one in terms of word to concept connections. However, the absence of an advantage of translation equivalent priming over semantic priming, and the lack of dominance and language distance effects, suggest that when two languages are acquired in parallel during infancy, their integration within a single dynamic system is highly robust to input variations.
Article
Bilinguals’ two languages are both active in parallel, and controlling co-activation is one of bilinguals’ principal challenges. Trilingualism multiplies this challenge. To investigate how third language (L3) learners manage interference between languages, Spanish-English bilinguals were taught an artificial language that conflicted with English and Spanish letter-sound mappings. Interference from existing languages was higher for L3 words that were similar to L1 or L2 words, but this interference decreased over time. After mastering the L3, learners continued to experience competition from their other languages. Notably, spoken L3 words activated orthography in all three languages, causing participants to experience cross-linguistic orthographic competition in the absence of phonological overlap. Results indicate that L3 learners are able to control between-language interference from the L1 and L2. We conclude that while the transition from two languages to three presents additional challenges, bilinguals are able to successfully manage competition between languages in this new context.
Article
Full-text available
This study investigates cross-language and cross-modal activation in bimodal bilinguals. Two groups of hearing bimodal bilinguals, natives (Experiment 1) and late learners (Experiment 2), for whom spoken Spanish is their dominant language and Spanish Sign Language (LSE) their non-dominant language, performed a monolingual semantic decision task with word pairs heard in Spanish. Half of the word pairs had phonologically related signed translations in LSE. The results showed that bimodal bilinguals were faster at judging semantically related words when the equivalent signed translations were phonologically related while they were slower judging semantically unrelated word pairs when the LSE translations were phonologically related. In contrast, monolingual controls with no knowledge of LSE did not show any of these effects. The results indicate cross-language and cross-modal activation of the non-dominant language in hearing bimodal bilinguals, irrespective of the age of acquisition of the signed language.
Article
Full-text available
Using a variant of the visual world eye-tracking paradigm, we examined whether language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High- and low-proficiency Hindi–English bilinguals were instructed to programme a saccade towards a line drawing that changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings was phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency towards the target was significantly higher in the presence of this cross-linguistic translation competitor than when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention, leading to interference in goal-directed action in the visual domain.
Article
Full-text available
We investigated whether speaking in one language affects cross- and within-language activation when subsequently switching to a task performed in the same or a different language. English–French bilinguals (L1 English, n = 29; L1 French, n = 28) were randomly assigned to a prior language context condition consisting of a spontaneous production task in English (the no-switch group) or in French (the switch group). Participants then performed an English spoken language comprehension task using the visual world method. The key result was that the switch group showed less evidence of cross-language competition than the no-switch group, consistent with the notion of an active inhibition of a prior language in the switch group. These data suggest that proficient bilinguals can globally suppress a non-target language, whether it is L1 or L2, though doing so requires cognitive resources that may be diverted from other demands, such as controlling within-language competition. Bilinguals occasionally experience thinking that the spoken words they hear belong to one known language when in fact they belong to another. Such confusions likely arise because bilinguals simultaneously activate multiple languages during spoken word recognition.
Article
Full-text available
Lexical access was examined in English–Spanish bilinguals by monitoring eye fixations on target and lexical competitors as participants followed spoken instructions in English to click on one of the objects presented on a computer (e.g., ‘Click on the beans’). Within-language lexical competitors had a phoneme onset in English that was shared with the target (e.g., ‘beetle’). Between-language lexical competitors had a phoneme onset in Spanish that was shared with the target (‘bigote’, ‘mustache’ in English). Participant groups varied in their age-of-acquisition of English and Spanish, and were examined in one of three language modes (Grosjean, 1998, 2001). A strong within-language (English) lexical competition (or cohort effect) was modulated by language mode and age of second language acquisition. A weaker between-language (Spanish) cohort effect was influenced primarily by the age-of-acquisition of Spanish. These results highlight the role of age-of-acquisition and mode in language processing. They are discussed in comparison to previous studies addressing the role of these two variables and in terms of existing models of bilingual word recognition.
Data
Full-text available
Timed picture naming was compared in seven languages that vary along dimensions known to affect lexical access. Analyses over items focused on factors that determine cross-language universals and cross-language disparities. With regard to universals, number of alternative names had large effects on reaction time within and across languages after target-name agreement was controlled, suggesting inhibitory effects from lexical competitors. For all the languages, word frequency and goodness of depiction had large effects, but objective picture complexity did not. Effects of word structure variables (length, syllable structure, compounding, and initial frication) varied markedly over languages. Strong cross-language correlations were found in naming latencies, frequency, and length. Other-language frequency effects were observed (e.g., Chinese frequencies predicting Spanish reaction times) even after within-language effects were controlled (e.g., Spanish frequencies predicting Spanish reaction times). These surprising cross-language correlations challenge widely held assumptions about the lexical locus of length and frequency effects, suggesting instead that they may (at least in part) reflect familiarity and accessibility at a conceptual level that is shared over languages.
Article
Full-text available
Recent studies have shown that word frequency estimates obtained from film and television subtitles are better at predicting performance in word recognition experiments than the traditional word frequency estimates based on books and newspapers. In this study, we present a subtitle-based word frequency list for Spanish, one of the most widely spoken languages. The subtitle frequencies are based on a corpus of 41M words taken from contemporary movies and TV series (screened between 1990 and 2009). In addition, the frequencies have been validated by correlating them with the RTs from two megastudies involving 2,764 words each (lexical decision and word naming tasks). The subtitle frequencies explained 6% more of the variance than the existing written frequencies in lexical decision, and 2% more in word naming. Word frequency, together with age of acquisition, is considered the most important variable in word comprehension and production: words encountered often in life are processed more efficiently than words rarely encountered. Any study involving the perception or production of words, whether in healthy individuals or in clinical samples (aphasia, Alzheimer's dementia, dyslexia, etc.), has to consider this variable. Therefore, researchers require good dictionaries that allow them to select words according to their frequency. Any language without a good word frequency measure is seriously disadvantaged when it comes to psycholinguistic research.
Article
Full-text available
During spoken word-recognition, bilinguals have been shown to access their two languages simultaneously. The present study examined effects of language proficiency and lexical status on parallel language activation. Language proficiency was manipulated by testing German-native and English-native bilingual speakers of German and English. Lexical status was manipulated by presenting target words that either overlapped in form across translation equivalents (cognate words) or did not overlap in form across translation equivalents (English-specific words). Participants identified targets (such as hen) from picture-displays that also included similar-sounding German competitor words (such as Hemd, shirt). Eye-movements to German competitors were used to index co-activation of German. Results showed that both bilingual groups co-activated German while processing cognate targets; however, only German-native bilinguals co-activated German while processing English-specific targets. These findings indicate that high language proficiency and cognate status boost parallel language activation in bilinguals.
Article
Full-text available
Masked translation priming between languages with different scripts exhibits a marked asymmetry in lexical decision, with much stronger priming from L1 to L2 than from L2 to L1. This finding was confirmed in a lexical decision task with Chinese–English bilinguals who were late learners of English. Following a suggestion made by Bradley (1991), the experiment was repeated using a speeded episodic recognition task. Participants studied Chinese words, and then were tested in an old/new classification task in which Chinese target words were primed by masked English translation equivalents. Significant priming was obtained for old items, not for new items. However, no priming was obtained when lexical decision was used. Unexpectedly, the episodic task showed a reverse asymmetry, since L1–L2 priming was not obtained with this task, although strong effects were obtained for lexical decision. A possible explanation for this pattern of results is that knowledge of L2 lexical items is represented episodically for late learners.
Article
Full-text available
In this study, we investigated automatic translation from English to Chinese and subsequent morphological decomposition of translated Chinese compounds. In two lexical decision tasks, Chinese-English bilinguals responded to English target words that were preceded by masked unrelated primes presented for 59 ms. Unbeknownst to participants, the Chinese translations of the words in each critical pair consisted of a fully opaque compound word (i.e., a compound with two constituent morphemes that were semantically unrelated to the compound) and a monomorphemic word that was either the first or the second morpheme of the compound. The data revealed that bilinguals responded faster to English word pairs whose Chinese translations repeated the first morpheme than to English word pairs whose Chinese translations did not repeat the first morpheme, but no effect of hidden second-morpheme repetition was found. This effect of hidden first-morpheme repetition suggests that participants translated English words to Chinese and decomposed the translated compounds into their constituent morphemes. Because the primes were presented for only 59 ms, translation and morphological decomposition must be fast and automatic.
Article
Full-text available
How do the two languages of bilingual individuals interact in everyday communication? Numerous behavioral- and event-related brain potential studies have suggested that information from the non-target language is spontaneously accessed when bilinguals read, listen, or speak in a given language. While this finding is consistent with predictions of current models of bilingual processing, most paradigms used so far have mixed the two languages by using language ambiguous stimuli (e.g., cognates or interlingual homographs) or explicitly engaging the two languages because of experimental task requirements (e.g., word translation or language selection). These paradigms will have yielded different language processing contexts, the effect of which has seldom been taken into consideration. We propose that future studies should test the effect of language context on cross-language interactions in a systematic way, by controlling and manipulating the extent to which the experiment implicitly or explicitly prompts activation of the two languages.
Article
Full-text available
Bilingual individuals have been shown to access their native language while reading in or listening to their other language. However, it is unknown what type of mental representation (e.g., sound or spelling) they retrieve. Here, using event-related brain potentials, we demonstrate unconscious access to the sound form of Chinese words when advanced Chinese-English bilinguals read or listen to English words. Participants were asked to decide whether or not English words presented in pairs were related in meaning; they were unaware of the fact that some of the unrelated word pairs concealed either a sound or a spelling repetition in their Chinese translations. Whereas spelling repetition in Chinese translations had no effect, concealed sound repetition significantly modulated event-related brain potentials. These results suggest that processing second language activates the sound, but not the spelling, of native language translations.
Article
Full-text available
Word frequency is the most important variable in research on word processing and memory. Yet, the main criterion for selecting word frequency norms has been the availability of the measure, rather than its quality. As a result, much research is still based on the old Kucera and Francis frequency norms. By using the lexical decision times of recently published megastudies, we show how bad this measure is and what must be done to improve it. In particular, we investigated the size of the corpus, the language register on which the corpus is based, and the definition of the frequency measure. We observed that corpus size is of practical importance for small sizes (depending on the frequency of the word), but not for sizes above 16-30 million words. As for the language register, we found that frequencies based on television and film subtitles are better than frequencies based on written sources, certainly for the monosyllabic and bisyllabic words used in psycholinguistic research. Finally, we found that lemma frequencies are not superior to word form frequencies in English and that a measure of contextual diversity is better than a measure based on raw frequency of occurrence. Part of the superiority of the latter is due to the words that are frequently used as names. Assembling a new frequency norm on the basis of these considerations turned out to predict word processing times much better than did the existing norms (including Kucera & Francis and Celex). The new SUBTL frequency norms from the SUBTLEX(US) corpus are freely available for research purposes from http://brm.psychonomic-journals.org/content/supplemental, as well as from the University of Ghent and Lexique Web sites.
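The abstract's distinction between raw word frequency and contextual diversity (the number of contexts, e.g. films, in which a word occurs) can be illustrated with a toy corpus. The word lists below are invented for illustration, not drawn from the SUBTLEX(US) corpus:

```python
from collections import Counter

# Toy "subtitle corpus": one word list per film (illustrative only)
films = [
    ["the", "duck", "swims"],
    ["the", "shovel", "digs"],
    ["a", "duck", "and", "a", "duck"],
]

# Raw frequency: total token count across the whole corpus
freq = Counter(w for film in films for w in film)

# Contextual diversity: number of films in which the word appears at all
cd = Counter(w for film in films for w in set(film))

print(freq["duck"], cd["duck"])  # → 3 2
```

Here "duck" has a raw frequency of 3 but appears in only 2 of the 3 films; the abstract reports that the diversity-style measure predicts lexical decision times better than raw counts.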
Article
Full-text available
The present study investigated cross-language priming effects with unique noncognate translation pairs. Unbalanced Dutch (first language [L1])-English (second language [L2]) bilinguals performed a lexical decision task in a masked priming paradigm. The results of two experiments showed significant translation priming from L1 to L2 (meisje-girl) and from L2 to L1 (girl-meisje), using two different stimulus onset asynchronies (SOAs) (250 and 100 msec). Although translation priming from L1 to L2 was significantly stronger than priming from L2 to L1, the latter was significant as well. Two further experiments with the same word targets showed significant cross-language semantic priming in both directions (jongen [boy]-girl; boy-meisje [girl]) and for both SOAs. These data suggest that L1 and L2 are represented by means of a similar lexico-semantic architecture in which L2 words are also able to rapidly activate semantic information, although to a lesser extent than L1 words are able to. This is consistent with models assuming quantitative rather than qualitative differences between L1 and L2 representations.
Article
Full-text available
Measuring event-related potentials (ERPs) has been fundamental to our understanding of how language is encoded in the brain. One particular ERP response, the N400 response, has been especially influential as an index of lexical and semantic processing. However, there remains a lack of consensus on the interpretation of this component. Resolving this issue has important consequences for neural models of language comprehension. Here we show that evidence bearing on where the N400 response is generated provides key insights into what it reflects. A neuroanatomical model of semantic processing is used as a guide to interpret the pattern of activated regions in functional MRI, magnetoencephalography and intracranial recordings that are associated with contextual semantic manipulations that lead to N400 effects.
Article
Full-text available
Hebrew-English cognates (translations similar in meaning and form) and noncognates (translations similar in meaning only) were examined in masked translation priming. Enhanced priming for cognates was found with L1 (dominant language) primes, but unlike previous results, it was not found with L2 (nondominant language) primes. Priming was also obtained for noncognates, whereas previous studies showed unstable effects for such stimuli. The authors interpret the results in a dual-lexicon model by suggesting that (a) both orthographic and phonological overlap are needed to establish shared lexical entries for cognates (and hence also symmetric cognate priming), and (b) script differences facilitate rapid access by providing a cue to the lexical processor that directs access to the proper lexicon, thus producing stable noncognate priming. The asymmetrical cognate effect obtained with different scripts may be attributed to an overreliance on phonology in L2 reading.
Article
Full-text available
Two experiments explore the activation of semantic information during spoken word recognition. Experiment 1 shows that as the name of an object unfolds (e.g., lock), eye movements are drawn to pictorial representations of both the named object and semantically related objects (e.g., key). Experiment 2 shows that objects semantically related to an uttered word's onset competitors become active enough to draw visual attention (e.g., if the uttered word is logs, participants fixate on key because of partial activation of lock), even though the onset competitor itself is not present in the visual display. Together, these experiments provide detailed information about the activation of semantic information associated with a spoken word and its phonological competitors and demonstrate that transient semantic activation is sufficient to impact visual attention.
Article
Full-text available
Whether the native language of bilingual individuals is active during second-language comprehension is the subject of lively debate. Studies of bilingualism have often used a mix of first- and second-language words, thereby creating an artificial "dual-language" context. Here, using event-related brain potentials, we demonstrate implicit access to the first language when bilinguals read words exclusively in their second language. Chinese–English bilinguals were required to decide whether English words presented in pairs were related in meaning or not; they were unaware of the fact that half of the words concealed a character repetition when translated into Chinese. Whereas the hidden factor failed to affect behavioral performance, it significantly modulated brain potentials in the expected direction, establishing that English words were automatically and unconsciously translated into Chinese. Critically, the same modulation was found in Chinese monolinguals reading the same words in Chinese, i.e., when Chinese character repetition was evident. Finally, we replicated this pattern of results in the auditory modality by using a listening comprehension task. These findings demonstrate that native-language activation is an unconscious correlate of second-language comprehension. Keywords: bilingualism, event-related potentials, language access, semantic priming, unconscious priming.
Article
Full-text available
To develop a reliable and valid questionnaire of bilingual language status with predictable relationships between self-reported and behavioral measures. In Study 1, the internal validity of the Language Experience and Proficiency Questionnaire (LEAP-Q) was established on the basis of self-reported data from 52 multilingual adult participants. In Study 2, criterion-based validity was established on the basis of standardized language tests and self-reported measures from 50 adult Spanish-English bilinguals. Reliability and validity of the questionnaire were established on healthy adults whose literacy levels were equivalent to that of someone with a high school education or higher. Factor analyses revealed consistent factors across both studies and suggested that the LEAP-Q was internally valid. Multiple regression and correlation analyses established criterion-based validity and suggested that self-reports were reliable indicators of language performance. Self-reported reading proficiency was a more accurate predictor of first-language performance, and self-reported speaking proficiency was a more accurate predictor of second-language performance. Although global measures of self-reported proficiency were generally predictive of language ability, deriving a precise estimate of performance on a particular task required that specific aspects of language history be taken into account. The LEAP-Q is a valid, reliable, and efficient tool for assessing the language profiles of multilingual, neurologically intact adult populations in research settings.
Article
Full-text available
The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have opposite effects on word processing. Experiment 2 confirmed these results: Word processing was slower for dense near neighborhoods and faster for dense distant neighborhoods. Analysis of a computational model showed that attractor dynamics can produce this pattern of neighborhood effects. The authors argue for reconsideration of traditional models of neighborhood effects in terms of attractor dynamics, which allow both inhibitory and facilitative effects to emerge.
Article
Most models of lexical access assume that bilingual speakers activate their two languages even when they are in a context in which only one language is used. A critical piece of evidence used to support this notion is the observation that a given word automatically activates its translation equivalent in the other language. Here, we argue that these findings are compatible with a different account, in which bilinguals “carry over” the structure of their native language to the non-native language during learning, and where there is no activation of translation equivalents. To demonstrate this, we describe a model in which language learning involves mapping native language phonological relationships to the non-native language, and we show how it can explain the results attributed to automatic activation of translation equivalents.
Article
Language and vision are highly interactive. Here we show that people activate language when they perceive the visual world, and that this language information impacts how speakers of different languages focus their attention. For example, when searching for an item (e.g., clock) in the same visual display, English and Spanish speakers look at different objects. Whereas English speakers searching for the clock also look at a cloud, Spanish speakers searching for the clock also look at a gift, because the Spanish names for gift (regalo) and clock (reloj) overlap phonologically. These different looking patterns emerge despite an absence of direct language input, showing that linguistic information is automatically activated by visual scene processing. We conclude that the varying linguistic information available to speakers of different languages affects visual perception, leading to differences in how the visual world is processed.
Book
Growth Curve Analysis and Visualization Using R provides a practical, easy-to-understand guide to carrying out multilevel regression/growth curve analysis (GCA) of time course or longitudinal data in the behavioral sciences, particularly cognitive science, cognitive neuroscience, and psychology. With a minimum of statistical theory and technical jargon, the author focuses on the concrete issue of applying GCA to behavioral science data and individual differences. http://www.crcpress.com/product/isbn/9781466584327 http://www.danmirman.org/gca
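The core move in growth curve analysis, modeling a time course with orthogonal polynomial time terms, can be sketched in a few lines. The book itself uses multilevel regression in R; the version below is a simplified single-subject least-squares illustration in Python, and the fixation proportions are made up:

```python
import numpy as np

def orthogonal_time_polys(n_timebins, degree):
    """Build orthogonal polynomial time terms (intercept, linear, quadratic, ...)
    via QR decomposition of a Vandermonde matrix over the time bins."""
    t = np.linspace(-1, 1, n_timebins)
    V = np.vander(t, degree + 1, increasing=True)  # columns: 1, t, t^2, ...
    Q, _ = np.linalg.qr(V)                         # orthonormal columns
    return Q

# Hypothetical fixation proportions over 10 time bins (not real data)
y = np.array([0.10, 0.12, 0.18, 0.30, 0.45, 0.58, 0.66, 0.70, 0.72, 0.73])

X = orthogonal_time_polys(len(y), degree=2)
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)  # one coefficient per time term
fitted = X @ coefs
print(np.round(coefs, 3))
```

Because the time terms are orthogonal, the intercept, linear, and quadratic coefficients can be interpreted independently (overall level, overall slope, curvature), which is the property GCA exploits when comparing conditions.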
Article
Performance of bilingual Russian–English speakers and monolingual English speakers during auditory processing of competing lexical items was examined using eye tracking. Results revealed that both bilinguals and monolinguals experienced competition from English lexical items overlapping phonetically with an English target item (e.g., spear and speaker). However, only bilingual speakers experienced competition from Russian competitor items overlapping crosslinguistically with an English target (e.g., spear and spichki, Russian for matches). English monolinguals treated the Russian competitors as they did any other filler items. This difference in performance between bilinguals and monolinguals tested with exactly the same sets of stimuli suggests that eye movements to a crosslinguistic competitor are due to activation of the other language and to between-language competition rather than being an artifact of stimulus selection or experimental design.
Article
During speech comprehension, bilinguals co-activate both of their languages, resulting in cross-linguistic interaction at various levels of processing. This interaction has important consequences for both the structure of the language system and the mechanisms by which the system processes spoken language. Using computational modeling, we can examine how cross-linguistic interaction affects language processing in a controlled, simulated environment. Here we present a connectionist model of bilingual language processing, the Bilingual Language Interaction Network for Comprehension of Speech (BLINCS), wherein interconnected levels of processing are created using dynamic, self-organizing maps. BLINCS can account for a variety of psycholinguistic phenomena, including cross-linguistic interaction at and across multiple levels of processing, cognate facilitation effects, and audio-visual integration during speech comprehension. The model also provides a way to separate two languages without requiring a global language-identification system. We conclude that BLINCS serves as a promising new model of bilingual spoken language comprehension.
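BLINCS itself is not reproduced here, but the self-organizing-map mechanism the abstract says it is built from can be sketched generically: each training step finds the map unit whose weight vector best matches the input and pulls that unit and its grid neighbors toward the input. The grid size, feature dimensionality, and learning parameters below are arbitrary illustrations, not BLINCS's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy map: an 8x8 grid of units, each holding a 5-dimensional weight vector
# (a stand-in for e.g. phonological features; not BLINCS's actual coding)
grid_h, grid_w, dim = 8, 8, 5
weights = rng.random((grid_h, grid_w, dim))

def train_step(weights, x, lr=0.5, sigma=1.5):
    """One SOM update: find the best-matching unit (BMU), then move the BMU
    and its grid neighborhood toward the input x, weighted by a Gaussian
    of grid distance."""
    dists = np.linalg.norm(weights - x, axis=2)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    rows, cols = np.indices((grid_h, grid_w))
    grid_dist2 = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
    influence = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    return weights + lr * influence * (x - weights)

x = rng.random(dim)
before = np.linalg.norm(weights - x, axis=2).min()
for _ in range(20):
    weights = train_step(weights, x)
after = np.linalg.norm(weights - x, axis=2).min()
print(before, "->", after)  # best-matching distance shrinks with training
```

The neighborhood update is what makes similar inputs settle onto nearby map regions, the self-organizing property the model uses to keep two languages separable without a global language-identification system.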
Article
Language non-selective lexical access in bilinguals has been established mainly using tasks requiring explicit language processing. Here, we show that bilinguals activate native language translations even when words presented in their second language are incidentally processed in a nonverbal, visual search task. Chinese-English bilinguals searched for strings of circles or squares presented together with three English words (i.e., distracters) within a 4-item grid. In the experimental trials, all four locations were occupied by English words, including a critical word that phonologically overlapped with the Chinese word for circle or square when translated into Chinese. The eye-tracking results show that, in the experimental trials, bilinguals looked more frequently and longer at critical than control words, a pattern that was absent in English monolingual controls. We conclude that incidental word processing activates lexical representations of both languages of bilinguals, even when the task does not require explicit language processing.
Article
Three experiments are reported in which picture naming and bilingual translation were performed in the context of semantically categorized or randomized lists. In Experiments 1 and 3 picture naming and bilingual translation were slower in the categorized than randomized conditions. In Experiment 2 this category interference effect in picture naming was eliminated when picture naming alternated with word naming. Taken together, the results of the three experiments suggest that in both picture naming and bilingual translation a conceptual representation of the word or picture is used to retrieve a lexical entry in one of the speaker's languages. When conceptual activity is sufficiently great to activate a multiple set of corresponding lexical representations, interference is produced in the process of retrieving a single best lexical candidate as the name or translation. The results of Experiment 3 showed further that category interference in bilingual translation occurred only when translation was performed from the first language to the second language, suggesting that the two directions of translation engage different interlanguage connections. A model to account for the asymmetric mappings of words to concepts in bilingual memory is described.
Article
Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name (pencil, given target panda) than on less confusable distractors (beetle, given target bottle). English listeners showed no such viewing time difference. The confusability was asymmetric: given pencil as target, panda did not distract more than distinct competitors. Distractors with Dutch names phonologically related to English target names (deksel, 'lid', given target desk) also received longer fixations than distractors with phonologically unrelated names. Again, English listeners showed no differential effect. With the materials translated into Dutch, Dutch listeners showed no activation of the English words (desk, given target deksel). The results motivate two conclusions: native phonemic categories capture second-language input even when stored representations maintain a second-language distinction; and lexical competition is greater for non-native than for native listeners.
Article
Two eye-tracking experiments examined spoken language processing in Russian-English bilinguals. The proportion of looks to objects whose names were phonologically similar to the name of a target object in either the same language (within-language competition), the other language (between-language competition), or both languages at the same time (simultaneous competition) was compared to the proportion of looks in a control condition in which no objects overlapped phonologically with the target. Results support previous findings of parallel activation of lexical items within and between languages, but suggest that the magnitude of the between-language competition effect may vary across first and second languages and may be mediated by a number of factors such as stimuli, language background, and language mode.
Article
Bilinguals have been shown to activate their two languages in parallel, and this process can often be attributed to overlap in input between the two languages. The present study examines whether two languages that do not overlap in input structure, and that have distinct phonological systems, such as American Sign Language (ASL) and English, are also activated in parallel. Hearing ASL-English bimodal bilinguals' and English monolinguals' eye-movements were recorded during a visual world paradigm, in which participants were instructed, in English, to select objects from a display. In critical trials, the target item appeared with a competing item that overlapped with the target in ASL phonology. Bimodal bilinguals looked more at the competing item than at phonologically unrelated items, and looked more at competing items than monolinguals did, indicating activation of the sign language during spoken English comprehension. The findings suggest that language co-activation is not modality specific, and provide insight into the mechanisms that may underlie cross-modal language co-activation in bimodal bilinguals, including the role that top-down and lateral connections between levels of processing may play in language comprehension.
Article
The mapping of phonetic information to lexical representations in second-language (L2) listening was examined using an eyetracking paradigm. Japanese listeners followed instructions in English to click on pictures in a display. When instructed to click on a picture of a rocket, they experienced interference when a picture of a locker was present, that is, they tended to look at the locker instead. However, when instructed to click on the locker, they were unlikely to look at the rocket. This asymmetry is consistent with a similar asymmetry previously observed in Dutch listeners’ mapping of English vowel contrasts to lexical representations. The results suggest that L2 listeners may maintain a distinction between two phonetic categories of the L2 in their lexical representations, even though their phonetic processing is incapable of delivering the perceptual discrimination required for correct mapping to the lexical distinction. At the phonetic processing level, one of the L2 categories is dominant; the present results suggest that dominance is determined by acoustic–phonetic proximity to the nearest L1 category. At the lexical processing level, representations containing this dominant category are more likely than representations containing the non-dominant category to be correctly contacted by the phonetic input.
Article
A well-known asymmetry exists in the bilingual masked priming literature in which lexical decision is used: namely, masked primes in the dominant language (L1) facilitate decision times on targets in the less dominant language (L2), but not vice versa. In semantic categorization, on the other hand, priming is symmetrical. In Experiments 1–3 we confirm this task difference, finding robust masked L2–L1 translation priming in semantic categorization but not lexical decision. In formulating an account for these findings, we begin with the assumption of a representational asymmetry between L1 and L2 lexical semantic representations, such that L1 representations are richly populated and L2 representations are not. According to this representational account, L2–L1 priming does not occur in lexical decision because an insufficient proportion of the L1 lexical semantic representation is activated by the L2 prime. In semantic categorization, we argue that the semantic information recruited to generate a decision is restricted by the task category, and that this restriction enhances the effectiveness of the L2 prime. In Experiments 4–6, these assumptions were tested in a within-language setting by pairing many-sense words (e.g., "head") with few-sense words (e.g., "skull"). In lexical decision, robust priming was obtained in the many-to-few direction (analogous to L1–L2), but no priming was obtained in the few-to-many direction (analogous to L2–L1) using the same word pairs. Priming in semantic categorization, on the other hand, was obtained in both directions. We propose the Sense Model as a possible account of these findings.
Article
The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set: precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the impact over time of word frequency and 2 partially overlapping competitor set definitions: onset density and neighborhood density. Time course measures revealed early and continuous effects of frequency (facilitatory) and onset-based similarity (inhibitory). Neighborhood density appears to have early facilitatory effects and late inhibitory effects. The late inhibitory effects are due to differences in the temporal distribution of similarity within neighborhoods. The early facilitatory effects are due to subphonemic cues that inform the listener about word length before the entire word is heard. The results support a new conception of lexical competition neighborhoods in which recognition occurs against a background of activated competitors that changes over time based on fine-grained goodness-of-fit and competition dynamics.
Article
Deaf bilinguals for whom American Sign Language (ASL) is the first language and English is the second language judged the semantic relatedness of word pairs in English. Critically, a subset of both the semantically related and unrelated word pairs were selected such that the translations of the two English words also had related forms in ASL. Word pairs that were semantically related were judged more quickly when the form of the ASL translation was also similar whereas word pairs that were semantically unrelated were judged more slowly when the form of the ASL translation was similar. A control group of hearing bilinguals without any knowledge of ASL produced an entirely different pattern of results. Taken together, these results constitute the first demonstration that deaf readers activate the ASL translations of written words under conditions in which the translation is neither present perceptually nor required to perform the task.
Article
The time course of cross-script translation priming and repetition priming was examined in two different scripts using a combination of the masked priming paradigm with the recording of event-related potentials (ERPs). Japanese-English bilinguals performed a semantic categorization task in their second language (L2) English and in their first language (L1) Japanese. Targets were preceded by a visually presented related (translation equivalent/repeated) or unrelated prime. The results showed that the amplitudes of the N250 and N400 ERP components were significantly modulated for L2-L2 repetition priming, L1-L2 translation priming, and L1-L1 repetition priming, but not for L2-L1 translation priming. There was also evidence for priming effects in an earlier 100-200 ms time window for L1-L1 repetition priming and L1-L2 translation priming. We argue that a change in script across primes and targets provides optimal conditions for prime word processing, hence generating very fast-acting translation priming effects when primes are in L1.
Article
The present study examines language effects in second language learners. In three experiments participants monitored a stream of words for occasional probes from one semantic category and ERPs were recorded to non-probe critical items. In Experiment 1 L1 English participants who were university learners of French saw two lists of words blocked by language, one in French and one in English. We observed a large effect of language that mostly affected amplitudes of the N400 component, but starting as early as 150 ms post-stimulus onset. A similar pattern was found in Experiment 2 with L1 French and L2 English, showing that the effect is due to language dominance and not language per se. Experiment 3 found that proficient French/English bilinguals exhibited a different pattern of language effects showing that these effects are modulated by proficiency. These results lend further support to the hypothesis that word recognition during the early phases of L2 acquisition in late learners of L2 involves a specific set of mechanisms compared with recognition of L1 words.
Article
The physical energy that we refer to as a word, whether in isolation or embedded in sentences, takes its meaning from the knowledge stored in our brains through a lifetime of experience. Much empirical evidence indicates that, although this knowledge can be used fairly flexibly, it is functionally organized in 'semantic memory' along a number of dimensions, including similarity and association. Here, we review recent findings using an electrophysiological brain component, the N400, that reveal the nature and timing of semantic memory use during language comprehension. These findings show that the organization of semantic memory has an inherent impact on sentence processing. The left hemisphere, in particular, seems to capitalize on the organization of semantic memory to pre-activate the meaning of forthcoming words, even if this strategy fails at times. In addition, these electrophysiological results support a view of memory in which world knowledge is distributed across multiple, plastic-yet-structured, largely modality-specific processing areas, and in which meaning is an emergent, temporally extended process, influenced by experience, context, and the nature of the brain itself.
Article
Spoken word recognition is characterized by multiple activation of sound patterns that are consistent with the acoustic-phonetic input. Recently, an extreme form of multiple activation was observed: Bilingual listeners activated words from both languages that were consistent with the input. We explored the degree to which bilingual multiple activation may be constrained by fine-grained acoustic-phonetic information. In a head-mounted eyetracking experiment, we presented Spanish-English bilinguals with spoken Spanish words having word-initial stop consonants with either English- or Spanish-appropriate voice onset times. Participants fixated interlingual distractors (nontarget pictures whose English names shared a phonological similarity with the Spanish targets) more frequently than control distractors when the target words contained English-appropriate voice onset times. These results demonstrate that fine-grained acoustic-phonetic information and a precise match between input and representation are critical for parallel activation of two languages.
Article
Much research in bilingualism has addressed the question of the extent to which lexical information is shared between languages. The present study investigated whether syntactic information is shared by testing if syntactic priming occurs between languages. Spanish-English bilingual participants described cards to each other in a dialogue game. We found that a participant who had just heard a sentence in Spanish tended to use the same type of sentence when describing the next card in English. In particular, English passives were considerably more common following a Spanish passive than otherwise. We use the results to extend current models of the representation of grammatical information to bilinguals.
Article
This article describes a Windows program that enables users to obtain a broad range of statistics concerning the properties of word and nonword stimuli, including measures of word frequency, orthographic similarity, orthographic and phonological structure, age of acquisition, and imageability. It is designed for use by researchers in psycholinguistics, particularly those concerned with recognition of isolated words. The program computes measures of orthographic similarity on line, either with respect to a default vocabulary of 30,605 words or to a vocabulary specified by the user. In addition to providing standard orthographic neighborhood measures, the program can be used to obtain information about other forms of orthographic similarity, such as transposed-letter similarity and embedded-word similarity. It is available, free of charge, from the following Web site: http://www.maccs.mq.edu.au/colin/N-Watch/.
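The standard orthographic neighborhood measure the entry above refers to (Coltheart's N) counts the vocabulary words of the same length that differ from a target in exactly one letter; transposed-letter similarity instead allows one swap of adjacent letters. A minimal sketch of both measures follows; the small vocabulary is purely illustrative, not N-Watch's 30,605-word default list:

```python
def is_substitution_neighbor(a: str, b: str) -> bool:
    # Orthographic (substitution) neighbors: same length,
    # differing in exactly one letter position.
    if len(a) != len(b):
        return False
    return sum(x != y for x, y in zip(a, b)) == 1

def is_transposition_neighbor(a: str, b: str) -> bool:
    # Transposed-letter neighbors: identical except that two
    # adjacent letters are swapped (e.g., "calm" / "clam").
    if len(a) != len(b) or a == b:
        return False
    diffs = [i for i in range(len(a)) if a[i] != b[i]]
    return (len(diffs) == 2 and diffs[1] == diffs[0] + 1
            and a[diffs[0]] == b[diffs[1]]
            and a[diffs[1]] == b[diffs[0]])

def coltheart_n(word: str, vocabulary: list[str]) -> int:
    # Coltheart's N: the count of substitution neighbors in the vocabulary.
    return sum(is_substitution_neighbor(word, w) for w in vocabulary)

# Illustrative vocabulary (hypothetical sample)
vocab = ["cat", "cot", "cut", "car", "bat", "cast", "coat"]
print(coltheart_n("cat", vocab))        # cot, cut, car, bat -> 4
print(is_transposition_neighbor("calm", "clam"))  # True
```

Note that the target word itself is never counted as its own neighbor, since zero letters differ.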
Article
This article describes a Windows program that enables users to obtain a broad range of statistics concerning the properties of word and nonword stimuli in Spanish, including word frequency, syllable frequency, bigram and biphone frequency, orthographic similarity, orthographic and phonological structure, concreteness, familiarity, imageability, valence, arousal, and age-of-acquisition measures. It is designed for use by researchers in psycholinguistics, particularly those concerned with recognition of isolated words. The program computes measures of orthographic similarity online, with respect to either a default vocabulary of 31,491 Spanish words or a vocabulary specified by the user. In addition to providing standard orthographic and phonological neighborhood measures, the program can be used to obtain information about other forms of orthographic similarity, such as transposed-letter similarity and embedded-word similarity. It is available, free of charge, from the following Web site: www.maccs.mq.edu.au/~colin/B-Pal.
Article
The influence of phonological similarity on bilingual language processing was examined within and across languages in three experiments. Phonological similarity was manipulated within a language by varying neighborhood density, and across languages by varying the extent of cross-linguistic overlap between native and non-native languages. In Experiment 1, the speed and accuracy of bilinguals' picture naming were susceptible to phonological neighborhood density in both the first and the second language. In Experiment 2, eye-movement patterns indicated that the time-course of language activation varied across phonological neighborhood densities and across native/non-native language status. In Experiment 3, the speed and accuracy of bilingual performance in an auditory lexical decision task were influenced by the degree of cross-linguistic phonological overlap. Together, the three experiments confirm that bilinguals are sensitive to phonological similarity within and across languages and suggest that this sensitivity is asymmetrical across native and non-native languages and varies along the time-course of word processing.
Van Hell, J. G., Ormel, E., van der Loop, J., & Hermans, D. (2009). Cross-language interaction in unimodal and bimodal bilinguals. Paper presented at the 16th Conference of the European Society for Cognitive Psychology, Cracow, Poland, September 2-5.
Yee, E., & Thompson-Schill, S. (2007). Does lexical activation flow from word meanings to word sounds during spoken word recognition? Poster presented at the 48th annual meeting of the Psychonomic Society, Long Beach, CA. https://doi.org/10.1037/e527342012-716
Means (Standard Deviations) for Target, Competitor, and Distractor items. No differences were found between conditions (ps > 0.1).
Hartsuiker et al. Is syntax separate or shared between languages? Cross-linguistic syntactic priming in Spanish-English bilinguals.