Brief article
The KEY to the ROCK: Near-homophony in nonnative visual
word recognition
Mitsuhiko Ota a,*, Robert J. Hartsuiker b, Sarah L. Haywood a
a School of Philosophy, Psychology and Language Sciences, University of Edinburgh, Dugald Stewart Building, 3 Charles Street, Edinburgh EH8 9AD, UK
b Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, 9000 Ghent, Belgium
* Corresponding author. Fax: +44 0 131 650 6883. E-mail address: mits@ling.ed.ac.uk (M. Ota).
Cognition 111 (2009) 263–269. doi:10.1016/j.cognition.2008.12.007
article info
Article history:
Received 25 July 2008
Revised 1 December 2008
Accepted 23 December 2008
Keywords:
Nonnative language phonology
Visual word recognition
Homophone
Lexical representation
Bilingualism
Arabic
Japanese
abstract
To test the hypothesis that native language (L1) phonology can affect the lexical represen-
tations of nonnative words, a visual semantic-relatedness decision task in English was
given to native speakers and nonnative speakers whose L1 was Japanese or Arabic. In
the critical conditions, the word pair contained a homophone or near-homophone of a
semantically associated word, where a near-homophone was defined as a phonological
neighbor involving a contrast absent in the speaker’s L1 (e.g., ROCK–LOCK for native speak-
ers of Japanese). In all participant groups, homophones elicited more false positive errors
and slower processing than spelling controls. In the Japanese and Arabic groups, near-
homophones also induced relatively more false positives and slower processing. The
results show that, even when auditory perception is not involved, recognition of nonnative
words and, by implication, their lexical representations are affected by the L1 phonology.
© 2008 Elsevier B.V. All rights reserved.
1. Introduction
It is well known that late bilinguals encounter difficul-
ties in perceiving and producing the difference between
sounds in a second language (L2) that are not contrastive
in their native language (L1). The problem is most pro-
nounced when the two L2 sounds are phonetically similar
to a single phoneme in the L1 (Best, 1995; Bohn & Flege,
1992; Flege, 1995; Flege, Bohn, & Jang, 1997; Sebastián-
Gallés & Soto-Faraco, 1999). The classic example is the case
of English /l/ and /r/ for native speakers of Japanese, a
language that lacks that contrast and has just one phoneme
(/ɾ/) that corresponds to both /l/ and /r/ (Goto, 1971;
MacKain, Best, & Strange, 1981; Mochizuki, 1981).
Recent research has begun to investigate the effects of
such L1–L2 phonemic mismatch on L2 spoken word recog-
nition. Unsurprisingly, late bilinguals exhibit indeterminacy
between L2 words that differ by a nonnative contrast. For
example, eye-tracking studies show that native Japanese
speakers tend not to resolve the difference between English
words such as rocket and locker until the second half of the
word is heard (Cutler, Weber, & Otake, 2006). In auditory
lexical decision tasks, Japanese speakers who have heard
an English word including /l/ or /r/ (e.g., light) are faster in
responding to its minimal-pair counterpart (e.g., write)
(Cutler & Otake, 2004). Similar priming effects have been
observed in native Dutch speakers processing English min-
imal pairs involving the non-Dutch contrast /æ/–/ɛ/ (e.g.,
cattle vs. kettle) (Weber & Cutler, 2004), and native Spanish
speakers processing Catalan minimal pairs involving non-
Spanish contrasts such as /e/–/ɛ/, /o/–/ɔ/, and /s/–/z/ (Pallier,
Colomé, & Sebastián-Gallés, 2001; Sebastián-Gallés,
Echeverría, & Bosch, 2005), an effect also consistent
with ERP evidence (Sebastián-Gallés, Rodríguez-Fornells,
de Diego-Balaguer, & Díaz, 2006).
These cross-lexical effects may be products of indeter-
minate lexical representations, as suggested by researchers
mentioned in the previous paragraph. According to this
interpretation, the phonological representations of lock
vs. rock may not be completely separate in the Japanese–
English bilinguals’ mental lexicon, making the words func-
tionally homophonous. However, the observed effects may
also be the result of phonetic misperception of the nonna-
tive sounds. For instance, native speakers of Japanese lis-
tening to English words containing /l/ or /r/ may simply
fail to decode the relevant speech signals, and thus misper-
ceive [lak] as [rak]. In a spoken word recognition task,
such effects of prelexical misperception are difficult to sep-
arate from those of indeterminate lexical representations
because comprehension of speech materials inherently in-
volves processing of auditory input.
In order to avoid such a confound between perception
and representation, we have devised an experiment that
builds on findings from visual word recognition research.
A range of experimental evidence shows that (monolin-
gual) readers automatically access the phonological infor-
mation of orthographically presented words (see Frost,
1998, for a comprehensive review). For instance, exposure
to written words speeds up or improves the subsequent
identification of phonologically identical words (e.g., Drie-
ghe & Brysbaert, 2002; Grainger & Ferrand, 1994; Lukatela
& Turvey, 1990; Lukatela & Turvey, 1994; Perfetti & Bell,
1991). Furthermore, it has been demonstrated that access
to the meaning of visual words is mediated by the phonol-
ogy of the lexical item. When asked to judge whether a
word is a member of a particular semantic category (e.g.,
A FLOWER), participants tend to make more false positive
errors for homophones and pseudo-homophones (e.g.,
ROWS or ROWZ for ROSE) than for spelling-matched con-
trols (e.g., ROBS) (Van Orden, 1987; Van Orden, Johnston,
& Hale, 1988; Van Orden, Pennington, & Stone, 1990). Sim-
ilarly, in judging whether two visual words are semanti-
cally related, participants are less accurate and slower in
rejecting unrelated word pairs that involve homophones
(e.g., LION–BARE) or pseudohomophones (e.g., TABLE–
CHARE) than pairs involving their visual controls (e.g.,
LION–BEAN, TABLE–CHARK) (Lesch & Pollatsek, 1998;
Luo, Johnson, & Gallo, 1998). The indication is that viewing
a visual word (e.g., BARE) activates its phonological repre-
sentation (/beə(r)/), which in turn activates its homophone
(bear) and causes the semantic interference.
In the experiment reported below, we exploited this
mechanism of phonological mediation in visual word rec-
ognition to examine the effects of L1 phonology on L2 lex-
ical representations without using auditory stimuli. We
hypothesized that, if a lack of contrast in the L1 renders
L2 words functionally homophonous, the same kind of
homophone effects found in monolingual visual word rec-
ognition should also be induced in nonnative speakers by
L2 minimal pairs on a nonnative contrast (e.g., LOCK and
ROCK for native speakers of Japanese). We call such words
near-homophones. The specific task we employed was the
semantic-relatedness judgment used in Luo et al. (1998).
Our reasoning predicts there to be relatively more false po-
sitive errors and slower processing for pairs involving
near-homophones (e.g., KEY–ROCK) than for those involv-
ing spelling controls (e.g., KEY–SOCK).
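For concreteness, the logic behind the near-homophone prediction can be illustrated with a toy sketch of our own (not part of the original study): if the phoneme categories of L2 words are filtered through the L1 inventory, members of a minimal pair on a missing contrast collapse onto a single phonological code, whereas the same pair stays distinct for speakers whose L1 has the contrast. The mappings and symbols below are deliberately simplified.

```python
# Toy illustration only: L2 phoneme categories filtered through an L1
# inventory that lacks a given contrast. Symbols and mappings are
# simplified for exposition.
L1_FILTERS = {
    "English": {},                      # native contrast preserved
    "Japanese": {"l": "r", "r": "r"},   # /l/ and /r/ collapse to one category
    "Arabic": {"p": "b", "b": "b"},     # /p/ and /b/ collapse to one category
}

def l1_filtered_code(phonemes, l1):
    """Map a sequence of L2 phoneme symbols through the listener's L1 filter."""
    mapping = L1_FILTERS[l1]
    return tuple(mapping.get(p, p) for p in phonemes)

LOCK = ("l", "ɒ", "k")
ROCK = ("r", "ɒ", "k")
for l1 in ("English", "Japanese", "Arabic"):
    collide = l1_filtered_code(LOCK, l1) == l1_filtered_code(ROCK, l1)
    print(f"{l1}: LOCK and ROCK share one phonological code -> {collide}")
# English: False, Japanese: True, Arabic: False
```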
We tested two groups of nonnative English speakers
with different L1s, Japanese and Arabic, as well as native
speakers of English as a control group. The two L1s are
complementary with respect to the critical English phone-
mic contrasts we tested: /l/–/r/ (lacking only in Japanese)
and /p/–/b/ (lacking only in Arabic). The two L1s were cho-
sen also because neither one of them uses the Roman
alphabet as their standard script. This prevented the non-
native speakers from accessing their native language
grapheme-phoneme correspondence rules during recogni-
tion of L2 visual words, a process known to occur in bilin-
gual reading when the L1 and L2 share the same script
(Brysbaert, Van Dyck, & Van de Poel, 1999; Dijkstra, Grain-
ger, & Van Heuven, 1999; Haigh & Jared, 2007; Lemhöfer &
Dijkstra, 2004). On the other hand, by selecting L1s that do
not use Roman-alphabetic writing, we may have risked the
possibility of studying nonnative speakers who may not
access phonological information when reading L2 English
words at all. In order to check that our nonnative partici-
pants did indeed generally engage in phonological process-
ing while reading English words, we also tested their
recognition of real homophones, where genuine homo-
phone effects were expected.
Measures were taken to control for two other extrane-
ous factors. First, since having separate lexical representa-
tions for words containing a nonnative contrast is
contingent on knowing such a contrast, we excluded non-
native speakers who were incapable of performing above
chance level in a phoneme identification task involving
the critical English contrasts. Second, homophone confu-
sion errors in visual word tasks may not only reflect pho-
nological mediation but also participants’ inaccurate
orthographic-lexical knowledge (Coltheart, Patterson, &
Leahy, 1994; Starr & Fleming, 2001). To minimize the im-
pact of this factor, we tested our participants’ orthographic
knowledge of the stimulus words offline, and only included
their response to a particular word in the semantic-relat-
edness decision task if it had been correctly answered in
the off-line task.
In sum, the goal of this study was to present evidence
independent of perceptual effects that lack of L2 contrasts
in the L1 can lead to indeterminacy in phonological repre-
sentations of L2 words in the mental lexicon. We set out to
test this hypothesis in a semantic-relatedness decision task
designed after Luo et al. (1998). Our predictions were as
follows. Participants in all three groups should produce lar-
ger false positive error rates and slower reaction times for
homophones in comparison to their spelling controls. The
Japanese speakers should also produce relatively large
false positive error rates and slow reaction times for /l–r/
near-homophones. Conversely, the Arabic speakers should
produce relatively large false positive error rates and slow
reaction times for /p–b/ near-homophones.
2. Method
2.1. Participants
Participants consisted of 20 native speakers of English
(18 females and 2 males), 20 native speakers of Japanese
(16 females and 4 males), and 20 native speakers of Arabic
(8 females and 12 males). All native speakers of English, 15
of the Japanese speakers, and 13 of the Arabic speakers
were university students. On average, the Japanese speak-
ers had lived in English-speaking countries for 3;6 years
(range 1;3–11;6) and the Arabic speakers for 5;0 years
(range 0;3–26;0). Eighteen of the Japanese speakers and 16
of the Arabic speakers reported using English as much or
more often than their native language on a daily basis.
There were 22 other nonnative (6 Japanese and 16 Ara-
bic) speakers who volunteered but did not participate in
the main experiment because they did not meet the inclu-
sion criterion set for a screening test, which was a two-
alternative forced choice matching task involving auditory
and visual nonsense syllables. The critical items were /la/,
/ra/, /lɛŋk/ and /rɛŋk/ for the /l/–/r/ contrast, and /pa/,
/ba/, /pɛŋk/ and /bɛŋk/ for the /p/–/b/ contrast. Each item
was auditorily presented and followed by two visually pre-
sented syllables in block letters, one that matched the
auditory syllable and one that matched the other member
of the minimal pair (e.g., <LENK> and <RENK>). The test
consisted of 32 such trials (16 for each critical contrast)
and 96 filler trials. Only participants that performed above
chance level (i.e., 11 out of 16) for both critical contrasts,
/l/–/r/ and /p/–/b/, were invited to take part in the seman-
tic-relatedness decision task.
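As a quick check on what the 11-out-of-16 cutoff amounts to, the probability of reaching it by guessing alone in a two-alternative task can be computed from the binomial distribution. The short sketch below is an illustration we add here, not material from the paper.

```python
from math import comb

def chance_of_at_least(correct, trials, p_chance=0.5):
    """P(X >= correct) under pure guessing on a two-alternative task."""
    return sum(
        comb(trials, k) * p_chance**k * (1 - p_chance) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# Screening criterion used in the study: at least 11 of the 16 critical
# trials correct for each contrast.
print(f"P(>= 11/16 correct by guessing) = {chance_of_at_least(11, 16):.3f}")
```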
2.2. Materials
The experimental stimuli were constructed from 20
homophone pairs (e.g., SON-SUN), 20 /l–r/ minimal pairs
(e.g., LOCK–ROCK) and 20 /p–b/ minimal pairs (e.g.,
PEACH–BEACH). A minimally different spelling control
was coupled to each pair (e.g., SOCK for LOCK–ROCK), with
the constraints that the control differed in only a single
grapheme from either member of the pair and that its pho-
nological difference from each member of the pair would
not involve a contrast missing in Japanese or Arabic. To
compensate for the large difference in orthography be-
tween some pair members, we used separate spelling con-
trols for such items (e.g., BRAKE (BRAVE)–BREAK (BREAD)).
For each contrast, the homophone or minimal pairs and the
spelling controls were approximately equated in terms of
frequency (based on the wordform frequency in the CELEX
database; Baayen, Piepenbrock, & Van Rijn, 1993) as well
as numbers of orthographic and phonological neighbors
(based on the English Lexicon Project; Balota et al., 2007).
A complete list of experimental words and their spelling
controls is given in the Appendix.
For each triplet (homophone or minimal pair and its
control), we created four word pairs for the semantic-relat-
edness decision task by combining each member of the
homophone or minimal pair with the semantic associate
of its counterpart and also by combining the spelling con-
trol with the two semantic associates. For example, from
the triplet LOCK–ROCK–SOCK we constructed LOCK–HARD
(HARD is an associate of ROCK), ROCK–KEY (KEY is an asso-
ciate of LOCK), SOCK–HARD, and SOCK–KEY. Thus, the
same semantic foil (e.g., KEY) was combined with an
experimental item (ROCK) and also with its spelling con-
trol (SOCK). These word pairs were divided into four 120
item lists, to which participants were randomly assigned.
Each participant saw only one member of each homophone
or minimal pair along with its spelling control. So for
instance, one participant may have seen KEY–ROCK (and
KEY–SOCK) but not HARD–LOCK (or HARD–SOCK).
The presentation position (i.e., left/right of the screen) of
the experimental item was counterbalanced across lists.
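The construction of the four critical pairs per triplet can be made explicit with a small sketch (illustrative only; the word choices follow the LOCK–ROCK–SOCK example above, and left/right presentation order was counterbalanced separately in the actual lists).

```python
def pairs_from_triplet(member_a, member_b, control, associate_of_a, associate_of_b):
    """Build the four semantically unrelated pairs derived from one triplet:
    each critical word is paired with the semantic associate of its
    counterpart, and the spelling control is paired with both associates."""
    return [
        (associate_of_b, member_a),  # e.g., HARD-LOCK (HARD is an associate of ROCK)
        (associate_of_a, member_b),  # e.g., KEY-ROCK  (KEY is an associate of LOCK)
        (associate_of_b, control),   # e.g., HARD-SOCK
        (associate_of_a, control),   # e.g., KEY-SOCK
    ]

print(pairs_from_triplet("LOCK", "ROCK", "SOCK", "KEY", "HARD"))
# [('HARD', 'LOCK'), ('KEY', 'ROCK'), ('HARD', 'SOCK'), ('KEY', 'SOCK')]
```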
In addition to these critical word pairs, each participant
saw 240 filler pairs. Of these, 180 pairs were semantically
related (e.g., DOCTOR–NURSE) and the remaining 60 pairs
were unrelated (e.g., PHONE–SHEEP). Since all the 120
experimental word pairs presented to a participant were
semantically unrelated, exactly half of the complete set
of experimental and filler items each participant saw re-
quired a ‘yes’ (i.e., ‘related’) response.
2.3. Procedure
Each trial began with a fixation point presented in the
center of the screen for 1000 ms, followed by two words,
which were juxtaposed horizontally, center-aligned, and
remained on screen until the participant pressed a button.
The participants were asked to judge whether the two
words were semantically related. They responded by push-
ing the <l> (‘yes’) or <a> (‘no’) key on the keyboard. The
stimulus words were presented in an 18 point bold Arial
font. The session began with 20 practice trials.
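The paper does not name the presentation software, so the following single-trial sketch in PsychoPy is only one possible way to implement the reported parameters (1000 ms central fixation, two horizontally juxtaposed words in 18 point bold Arial, <l> for 'yes' and <a> for 'no'); everything beyond those reported parameters is an assumption.

```python
from psychopy import visual, core, event

# One illustrative trial of the semantic-relatedness decision task.
win = visual.Window(color="white", units="pix", fullscr=False)
fixation = visual.TextStim(win, text="+", color="black")
left_word = visual.TextStim(win, text="KEY", pos=(-120, 0), color="black",
                            font="Arial", height=18, bold=True)
right_word = visual.TextStim(win, text="ROCK", pos=(120, 0), color="black",
                             font="Arial", height=18, bold=True)

fixation.draw()
win.flip()
core.wait(1.0)                      # 1000 ms fixation point

rt_clock = core.Clock()
left_word.draw()
right_word.draw()
win.flip()
rt_clock.reset()                    # measure latency from word onset
key, rt = event.waitKeys(keyList=["l", "a"], timeStamped=rt_clock)[0]
responded_related = (key == "l")    # <l> = 'yes' (related), <a> = 'no'
win.close()
print(responded_related, rt)
```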
After the semantic-relatedness decision task, partici-
pants proceeded to a lexical knowledge test involving all
of the experimental stimuli and spelling controls used in
the main task. The test was presented as an untimed,
self-paced web questionnaire. Each target appeared in bold
typeface next to three words, one of which was a near-syn-
onym of the target.
3. Results
3.1. Accuracy
Items for which participants made errors in the lexical knowledge test were excluded (along with their matched observations) from the error analysis of the semantic-relatedness decision task. These accounted for the exclusion of 9 responses (0.8%) from the native English group, 192 responses (8.0%) from the Japanese speaker group, and 520 responses (21.6%) from the Arabic speaker group. Mean error rates based on the remaining data are shown in Fig. 1.

Fig. 1. Mean percentage of errors (by participants) for experimental items and spelling controls, by contrast (homophones, /l–r/, /p–b/) and L1 group (English, Japanese, Arabic). Error bars indicate +1 standard error of the mean.
We first conducted a Group (English vs. Japanese vs. Arabic) × Contrast (homophone vs. /l–r/ vs. /p–b/) × Condition (experimental item vs. spelling control) mixed ANOVA of the mean error rates. The analysis revealed a significant Group × Contrast × Condition interaction (Table 1).
To pull apart the three-way interaction, a two-way ANOVA was conducted for each language group. In all three groups, a significant Contrast × Condition interaction was found (Table 2).
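For readers who want to reproduce an analysis of this kind, the sketch below shows how a by-participants (F1) Contrast × Condition repeated-measures ANOVA could be run in Python with statsmodels; the data frame holds synthetic stand-in values, since the study's cell means are not distributed with the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic stand-in for one language group's by-participants error data:
# one mean error rate per participant x contrast x condition cell.
rng = np.random.default_rng(0)
rows = [
    {"participant": subj, "contrast": contrast, "condition": condition,
     "error_rate": rng.uniform(0.0, 0.3)}
    for subj in range(20)
    for contrast in ("homophone", "l-r", "p-b")
    for condition in ("experimental", "control")
]
err_by_subj = pd.DataFrame(rows)

# F1 analysis for one group: Contrast x Condition repeated-measures ANOVA.
res = AnovaRM(err_by_subj, depvar="error_rate", subject="participant",
              within=["contrast", "condition"]).fit()
print(res.anova_table)  # F, degrees of freedom, and p for each effect
```

The by-items (F2) analysis is the same computation with items, rather than participants, treated as the random factor.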
A planned comparison showed that the native English speaker group produced more errors in the homophone condition than in the corresponding spelling control condition [t1(19) = 7.24, p < 0.001; t2(39) = 3.55, p < 0.001]. The Japanese group produced more errors in the experimental condition than in the corresponding spelling control condition for the homophone items [t1(19) = 5.85, p < 0.001; t2(39) = 7.17, p < 0.001] and the /l–r/ items [t1(19) = 6.00, p < 0.001; t2(39) = 5.52, p < 0.001]. The Arabic group produced more errors in the experimental condition than in their spelling control condition for the homophone items [t1(19) = 5.88, p < 0.001; t2(39) = 5.35, p < 0.001] and the /p–b/ items [t1(19) = 4.03, p < 0.001; t2(39) = 3.98, p < 0.001]. No difference was found between the experimental items and their spelling controls in any other language-contrast combinations [all ts < 1].
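The planned comparisons reported above are paired t-tests over participants (t1) or over items (t2); a minimal sketch with hypothetical numbers is given below.

```python
import numpy as np
from scipy import stats

# Hypothetical by-participants (t1) data: one mean error rate per participant
# for /l-r/ near-homophone pairs (e.g., KEY-ROCK) and for the matched
# spelling-control pairs (e.g., KEY-SOCK).
rng = np.random.default_rng(1)
near_homophone = rng.uniform(0.10, 0.40, size=20)
spelling_control = rng.uniform(0.00, 0.15, size=20)

# Paired test, because each participant contributes one value to each cell;
# the by-items t2 comparison instead pairs the two conditions over items.
t1, p = stats.ttest_rel(near_homophone, spelling_control)
print(f"t1({len(near_homophone) - 1}) = {t1:.2f}, p = {p:.4g}")
```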
3.2. Latency
The reaction time analysis excluded observations that
were errors or outliers (>10,000 ms). This resulted in the
exclusion of 4.4% of the native English data, 12.5% of the
Japanese speakers’ data, and 11.8% of the Arabic speakers’
data. To further reduce the impact of extreme reaction
times, we used medians for each participant and item.
Summary latency data are shown in Fig. 2.
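The trimming and aggregation steps described here are straightforward to express in pandas; the sketch below uses a tiny invented data frame whose column names are our own.

```python
import pandas as pd

# Invented trial-level records; column names are assumptions for illustration.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "item": ["KEY-ROCK", "KEY-SOCK", "LION-BARE", "KEY-ROCK", "KEY-SOCK", "LION-BARE"],
    "rt_ms": [950.0, 830.0, 12040.0, 1100.0, 990.0, 870.0],
    "correct": [True, True, True, False, True, True],
})

# Drop errors and outliers (> 10,000 ms), then take medians per participant
# (for the F1 analysis) and per item (for F2). In the full analysis the
# medians would be taken within each contrast x condition cell.
valid = trials[trials["correct"] & (trials["rt_ms"] <= 10_000)]
median_by_participant = valid.groupby("participant")["rt_ms"].median()
median_by_item = valid.groupby("item")["rt_ms"].median()
print(median_by_participant)
print(median_by_item)
```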
As with the error data, we first conducted a Group × Contrast × Condition mixed ANOVA of the median reaction times. The analysis revealed a three-way Group × Contrast × Condition interaction, significant by items and marginal by participants (Table 3).
A Contrast × Condition ANOVA conducted for each language group showed that the Contrast × Condition interaction was significant by participants in the English group, marginally significant by items in the Japanese group, and significant by items and marginal by participants in the Arabic group (Table 4).
Planned comparisons showed that, in the English group, homophones were rejected significantly slower than matched control items by participants (and marginally by items) [t1(19) = 2.66, p < 0.05; t2(39) = 1.72, p = 0.09]. In the Japanese group, experimental items were rejected significantly slower than spelling controls for homophones [t1(19) = 3.06, p < 0.01; t2(39) = 3.10, p < 0.01] and for /l–r/ items (by items) [t2(39) = 2.68, p < 0.05]. In the Arabic group, experimental items were rejected significantly slower than their spelling controls in the homophone condition [t1(19) = 2.32, p < 0.05; t2(39) = 2.49, p < 0.05] and in the /p–b/ condition [t1(18) = 2.87, p < 0.01; t2(39) = 3.69, p < 0.01] (one participant with no valid observations was excluded from the by-participants comparison for /p–b/). No difference was found between the experimental items and spelling controls in any other language-contrast pairs (all ts < 1, except t1(19) = 1.87, p = 0.08 for the English /l–r/ contrast, t1(19) = 1.02, p = 0.32 for the Japanese /p–b/ contrast, and t1(19) = 1.12, p = 0.28 for the Arabic /l–r/ contrast).
Table 1
Three-way ANOVA results of the mean error rates.

Effect                          By participants          By items
                                df        F1             df        F2
Group                           2, 57     7.44***        2, 351    18.64***
Contrast                        2, 114    13.41***       2, 351    11.15***
Condition                       1, 57     92.85***       1, 351    101.80***
Group × Contrast                4, 114    7.06***        4, 351    5.01***
Group × Condition               2, 57     19.10***       2, 351    11.14***
Contrast × Condition            2, 114    13.41***       2, 351    18.97***
Group × Contrast × Condition    4, 114    11.06***       4, 351    8.99***

*** p < 0.001.
Table 2
Two-way ANOVA results of the mean error rates.

Group      Effect                  By participants        By items
                                   df       F1            df        F2
English    Contrast                2, 38    12.04***      2, 117    7.51***
           Condition               1, 19    37.21***      1, 117    9.28**
           Contrast × Condition    2, 38    25.93***      2, 117    10.24***
Japanese   Contrast                2, 38    16.58***      2, 117    8.47***
           Condition               1, 19    59.31***      1, 117    61.13***
           Contrast × Condition    2, 38    16.17***      2, 117    16.76***
Arabic     Contrast                2, 38    2.25          2, 117    4.19*
           Condition               1, 19    22.65***      1, 117    34.64***
           Contrast × Condition    2, 38    9.19***       2, 117    8.25***

* p < .05. ** p < .01. *** p < .001.
Fig. 2. Mean response latencies in ms (by participants) for experimental items and spelling controls, by contrast (homophones, /l–r/, /p–b/) and L1 group (English, Japanese, Arabic). Error bars indicate +1 standard error of the mean.
Table 3
Three-way ANOVA results of the median reaction times.

Effect                          By participants          By items
                                df        F1             df        F2
Group                           2, 56     27.57***       2, 351    369.30***
Contrast                        2, 114    2.02           2, 351    6.73***
Condition                       1, 56     8.90**         1, 351    28.57***
Group × Contrast                4, 112    2.77*          4, 351    2.76*
Group × Condition               2, 56     1.94           2, 351    5.61**
Contrast × Condition            2, 112    3.91*          2, 351    3.72*
Group × Contrast × Condition    4, 112    2.11±          4, 351    3.15*

± p < 0.10. * p < 0.05. ** p < 0.01. *** p < 0.001.
Table 4
Two-way ANOVA results of the median reaction times.

Group      Effect                  By participants        By items
                                   df       F1            df        F2
English    Contrast                2, 38    7.11**        2, 117    6.20**
           Condition               1, 19    0.64          1, 117    2.41
           Contrast × Condition    2, 38    6.28**        2, 117    1.20
Japanese   Contrast                2, 38    6.26**        2, 117    5.28**
           Condition               1, 19    3.26±         1, 117    17.15***
           Contrast × Condition    2, 38    0.89          2, 117    3.02±
Arabic     Contrast                2, 36    1.04          2, 117    2.94±
           Condition               1, 18    5.36*         1, 117    10.20**
           Contrast × Condition    2, 36    3.00±         2, 117    3.73*

± p < 0.10. * p < 0.05. ** p < 0.01. *** p < 0.001.
4. Discussion
Our first prediction was that participants in all three
groups would produce higher error rates and slower reac-
tion times for word pairs involving real homophones than
their spelling controls. This was largely supported by the
data. These results replicate the findings of Luo et al.
(1998) and extend them to nonnative visual word recogni-
tion. In other words, phonological mediation occurs in L2
visual word recognition too.
This outcome has provided us with the empirical foun-
dation to test the other prediction we made, which was
that near-homophones would produce homophone-like ef-
fects in nonnative speakers. The data confirmed this pre-
diction too. More false positive errors and slower
reaction times were elicited by the experimental items
than their corresponding spelling controls in the /l–r/ con-
dition of the Japanese group and the /p–b/ condition of the
Arabic group. No such effect was obtained in the /p–b/ con-
dition of the Japanese group or the /l–r/ condition of the
Arabic group. This double dissociation between the Japa-
nese and Arabic groups shows that homophone-like effects
are revealed exactly and only in the condition with
minimal pairs that involve a missing phonemic contrast
in the L1.
The outcomes of our experiment provide direct evi-
dence that transfer of L1 phonology can occur not only in
the perception and articulation of L2 sounds, but also in
the phonological coding of L2 lexical entries. Since the
tasks employed in the current study involved only visual
recognition, the observed cross-lexical activation cannot
be attributed to auditory misperception. Our study, there-
fore, offers support for the representational interpretation
taken by Pallier et al. (2001), Sebastián-Gallés et al.
(2005) and Cutler et al. (2006) of their L2 spoken word rec-
ognition results. The lexicon of late bilinguals indeed fails
to completely separate L2 lexical entries that involve
nonnative phonological contrasts. What is striking about
our finding is that the effects of such representational inde-
terminacy are felt even in written word recognition where
the distinction between the word forms is marked by vi-
sual information, which in principle should be accessible
to readers regardless of their inventory of early acquired
phonemic systems.
Acknowledgments
This study was sponsored by a British Academy Grant
(SF-33008) awarded to Mitsuhiko Ota and Rob Hartsuiker,
an Edinburgh University Development Trust Research Fund
Grant (EO8679) awarded to Rob Hartsuiker, and a British
Academy Postdoctoral Fellowship (PDF/2005/131)
awarded to Sarah Haywood. The authors thank Krista Ehin-
ger for research assistance, and Vic Ferreira, Marc Brysba-
ert, and two anonymous reviewers for their helpful
commentary on the paper.
Appendix. List of homophones, minimal pairs, and spelling controls

Homophones/minimal pairs          Spelling controls
Homophone condition
BRAKE BREAK BRAVE/BREAD
BUY BYE BOY/BEE
CELL SELL TELL
FLOUR FLOWER FLOOR/FOLDER
HEAL HEEL HELL
HEAR HERE HEIR/HIRE
MADE MAID MAZE/MAIN
MAIL MALE MALL
MEAT MEET MELT
MINOR MINER MIRROR
PEACE PIECE PENCE/NIECE
SAIL SALE SOIL/SALT
SEA SEE SET
SIGHT CITE FIGHT/BITE
SOLE SOUL SOLO/SOUP
SON SUN SIN
STEAL STEEL STEAM/STEEP
TAIL TALE TALL
WAIST WASTE WRIST/TASTE
WEAK WEEK WEAR/WEED
l–r condition
CLAP CRAP CHAP
CLOUD CROWD CHORD
COLLECT CORRECT CONNECT
ELECT ERECT EJECT
LACK RACK SACK
LAG RAG TAG
LANE RAIN SANE/PAIN
LAP RAP MAP
LATE RATE MATE
LAW RAW SAW
LAY RAY DAY
LED RED BED
LIGHT RIGHT TIGHT
LIP RIP DIP
LIVER RIVER DIVER
LOAD ROAD TOAD
LOCK ROCK SOCK
LONG WRONG SONG/AMONG
LOT ROT POT
LUST RUST DUST
p–b condition
PAD BAD MAD
PAN BAN MAN
PARK BARK MARK
PAT BAT FAT
PATH BATH MATH
PAY BAY SAY
PEACH BEACH REACH
PEAK BEAK LEAK
PEAR BEAR FEAR
PEER BEER DEER
PET BET JET
PIG BIG DIG
PILL BILL HILL
PIN BIN WIN
PIT BIT WIT
POND BOND FOND
POUND BOUND SOUND
PULL BULL DULL
PUMP BUMP DUMP
PUNCH BUNCH LUNCH
References
Baayen, R. H., Piepenbrock, R., & Van Rijn, H. (1993). The CELEX lexical
database. Philadelphia: Linguistic Data Consortium.
Balota, D. A., Yap, M. J., Cortese, M. J., Hutchison, K. A., Neely, J. H., Nelson,
D. L., et al (2007). The English Lexicon Project. Behavior Research
Methods, 39, 445–459.
Best, C. T. (1995). A direct realist view of cross-language speech
perception. In W. Strange (Ed.), Speech perception and linguistic
experience. Theoretical and methodological issues (pp. 171–206).
Timonium, MD: York Press.
Bohn, O.-S., & Flege, J. (1992). The production of new and similar vowels
by adult German learners of English. Studies in Second Language
Acquisition, 14, 131–158.
Brysbaert, M., Van Dyck, G., & Van de Poel, M. (1999). Visual word
recognition in bilinguals: Evidence from masked phonological
priming. Journal of Experimental Psychology: Human Perception and
Performance, 25, 137–148.
Coltheart, V., Patterson, K., & Leahy, J. (1994). When a ROWS is a ROSE:
Phonological effects in written word comprehension. The Quarterly
Journal of Experimental Psychology, 47A, 917–955.
Cutler, A., & Otake, T. (2004). Pseudo-homophony in non-native listening.
Paper presented to the 75th meeting of the Acoustical Society of America.
New York: Acoustical Society of America.
Cutler, A., Weber, A., & Otake, T. (2006). Asymmetric mapping from
phonetic to lexical representations in second-language listening.
Journal of Phonetics, 34, 269–284.
Dijkstra, A., Grainger, J., & Van Heuven, W. J. B. (1999). Recognition of
cognates and interlingual homographs: The neglected role of
phonology. Journal of Memory and Language, 41, 496–518.
Drieghe, D., & Brysbaert, M. (2002). Strategic effects in associative priming
with words, homophones, and pseudohomophones. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 28,
951–961.
Flege, J. (1995). Second-language speech learning: Theory, findings, and
problems. In W. Strange (Ed.), Speech perception and linguistic
experience. Theoretical and methodological issues (pp. 233–273).
Timonium, MD: York Press.
Flege, J. E., Bohn, O.-S., & Jang, S. (1997). The production and perception of
English vowels by native speakers of German, Korean, Mandarin and
Spanish. Journal of Phonetics, 25, 437–470.
Frost, R. (1998). Toward a strong phonological theory of visual word
recognition: True issues and false trails. Psychological Bulletin, 123,
71–99.
Goto, H. (1971). Auditory perception by normal Japanese subjects of the
sounds "L" and "R". Neuropsychologia, 9, 317–323.
Grainger, J., & Ferrand, L. (1994). Phonology and orthography in visual
word recognition: Effects of masked homophone primes. Journal of
Memory and Language, 33, 218–233.
Haigh, C. A., & Jared, D. (2007). The activation of phonological
representations by bilinguals while reading silently: Evidence from
interlingual homophones. Journal of Experimental Psychology:
Learning, Memory, and Cognition, 33, 623–644.
Lemhöfer, K., & Dijkstra, T. (2004). Recognizing cognates and interlingual
homographs: Effects of code similarity in language-specific and
generalized lexical decision. Memory and Cognition, 32, 533–
550.
Lesch, M. F., & Pollatsek, A. (1998). Evidence for the use of assembled
phonology in accessing the meaning of printed words. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 24,
573–592.
Lukatela, G., & Turvey, M. T. (1990). Automatic and pre-lexical
computation of phonology in visual word identification. European
Journal of Cognitive Psychology, 2, 325–344.
Lukatela, G., & Turvey, M. T. (1994). Phonological access of the lexicon:
Evidence from associative priming with pseudo-homophones. Journal
of Experimental Psychology: Learning, Memory, and Cognition, 17,
951–966.
Luo, C. R., Johnson, R. A., & Gallo, D. A. (1998). Automatic activation of
phonological information in reading: Evidence from the semantic
relatedness decision task. Memory and Cognition, 26, 833–843.
MacKain, K. S., Best, C. T., & Strange, W. (1981). Categorical perception of
English /r/ and /l/ by Japanese bilinguals. Applied Psycholinguistics, 2,
369–390.
Mochizuki, M. (1981). The identification of /r/ and /l/ in natural and
synthesized speech. Journal of Phonetics, 9, 283–303.
Pallier, C., Colomé, A., & Sebastián-Gallés, N. (2001). The influence of
native-language phonology on lexical access: Exemplar-based versus
abstract lexical entries. Psychological Science, 12, 445–449.
Perfetti, C. A., & Bell, L. C. (1991). Phonemic activation during the first
40 ms of word identification: Evidence from backward masking and
priming. Journal of Memory and Language, 27, 59–70.
Sebastián-Gallés, N., Echeverría, S., & Bosch, L. (2005). The influence of
initial exposure on lexical representation: Comparing early and
simultaneous bilinguals. Journal of Memory and Language, 52,
240–255.
Sebastián-Gallés, N., Rodríguez-Fornells, A., de Diego-Balaguer, R., & Díaz,
B. (2006). First- and second-language phonological representations in
the mental lexicon. Journal of Cognitive Neuroscience, 18, 1277–
1291.
Sebastián-Gallés, N., & Soto-Faraco, S. (1999). Online processing of native
and nonnative contrasts in early bilinguals. Cognition, 72, 111–
123.
Starr, M. S., & Fleming, K. K. (2001). A rose by any other name is not the
same: The role of orthographic knowledge in homophone confusion
errors. Journal of Experimental Psychology: Learning, Memory, and
Cognition, 27, 744–760.
Van Orden, G. C. (1987). A Rows is a Rose: Spelling, sound, and reading.
Memory and Cognition, 15, 181–198.
Van Orden, G. C., Johnston, J. C., & Hale, B. L. (1988). Word identification in
reading proceeds from spelling to sound to meaning. Journal of
Experimental Psychology: Learning, Memory, and Cognition, 14,
371–386.
Van Orden, G. C., Pennington, B., & Stone, G. (1990). Word identification in
reading and the promise of subsymbolic psycholinguistics.
Psychological Review, 97, 488–522.
Weber, A., & Cutler, A. (2004). Lexical competition in non-native spoken-
word recognition. Journal of Memory and Language, 50, 1–25.