Article

Lexical and Phonotactic Cues to Speech Segmentation in a Second Language

... learners' speech segmentation is primarily guided by lexical cues pertaining to the relative usage frequency of the target words, and secondarily by phonotactic cues pertaining to the alignment of syllable and word boundaries inside the carrier strings (Sinor, 2006). This difference in strategy leads to greater difficulty in processing connected speech because of the relatively less efficient use of lexical cues. ...
Chapter
Full-text available
Words spoken in context (in connected speech) often sound quite different from those same words when they are spoken in isolation. The pronunciation of words in connected speech may leave vowel and consonant sounds relatively intact, as in some types of linking, or connected speech may result in modifications to pronunciation that are quite dramatic, including deletions, additions, or changes of sounds into other sounds, or combinations of all three in a given word in context. Connected speech processes based on register may lead to what Cauldwell (2013) calls jungle listening. As a result, the pronunciation of connected speech may become a significant challenge to intelligibility, both the intelligibility of native speech for non-native listeners and the intelligibility of non-native speech for native listeners. Connected speech, perhaps more than other features of English pronunciation, demonstrates the importance of intelligibility in listening comprehension.
... Indeed, although communication in a non-native language has become more a rule than an exception in today's world, most research on speech segmentation deals with the listener's ability to recognize words in their native language. Surprisingly little is known about how L2 listeners solve the segmentation problem online, and how they acquire cues to word boundaries (but see Altenberg, 2005; Cutler, Mehler, Norris & Seguí, 1989; Golato, 2002; Sanders, Neville & Woldorff, 2002; Sinor, 2006; and for a theoretical discussion, Carroll, 2004). ...
Article
Full-text available
Do Slovak–German bilinguals apply native Slovak phonological and lexical knowledge when segmenting German speech? When Slovaks listen to their native language, segmentation is impaired when fixed-stress cues are absent (Hanulíková, McQueen & Mitterer, 2010), and, following the Possible-Word Constraint (PWC; Norris, McQueen, Cutler & Butterfield, 1997), lexical candidates are disfavored if segmentation leads to vowelless residues, unless those residues are existing Slovak words. In the present study, fixed-stress cues on German target words were again absent. Nevertheless, in support of the PWC, both German and Slovak listeners recognized German words (e.g., Rose “rose”) faster in syllable contexts (suckrose) than in single-consonant contexts (krose, trose). But only the Slovak listeners recognized, for example, Rose faster in krose than in trose (k is a Slovak word, t is not). It appears that non-native listeners can suppress native stress segmentation procedures, but that they suffer from prevailing interference from native lexical knowledge.
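The Possible-Word Constraint described here can be stated procedurally: a lexical candidate is penalized whenever accepting it would strand a residue that contains no vowel and is not itself a word. The following is a minimal sketch of that check, assuming a toy vowel set and toy lexicon (all names and the penalty value are illustrative, not part of the original model):

```python
# Minimal sketch of a Possible-Word-Constraint check (illustrative only).
# A residue left over after carving out a candidate word is "viable" if it
# contains a vowel (could still become a word) or is already a known word
# (e.g. a Slovak single-consonant word such as "k").

VOWELS = set("aeiou")                  # toy vowel inventory
LEXICON = {"rose", "suck", "k", "v"}   # toy lexicon; "k" stands in for a Slovak word

def viable_residue(residue: str) -> bool:
    """A residue is viable if it has a vowel or is itself a lexical entry."""
    return bool(set(residue) & VOWELS) or residue in LEXICON

def pwc_penalty(context: str, candidate: str) -> float:
    """Return 1.0 (penalized) if recognizing `candidate` inside `context`
    strands a non-viable, vowelless residue; 0.0 otherwise."""
    start = context.find(candidate)
    if start == -1:
        return 0.0
    residues = [context[:start], context[start + len(candidate):]]
    return 1.0 if any(r and not viable_residue(r) for r in residues) else 0.0

print(pwc_penalty("krose", "rose"))     # 0.0 -- "k" is in the toy lexicon
print(pwc_penalty("trose", "rose"))     # 1.0 -- "t" is vowelless and not a word
print(pwc_penalty("suckrose", "rose"))  # 0.0 -- "suck" is a possible word
```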
Chapter
Full-text available
The aim of our research is to understand how speech learning changes over the life span and to explain why "earlier is better" as far as learning to pronounce a second language (L2) is concerned. An assumption we make is that the phonetic systems used in the production and perception of vowels and consonants remain adaptive over the life span, and that phonetic systems reorganize in response to sounds encountered in an L2 through the addition of new phonetic categories, or through the modification of old ones. The chapter is organized in the following way. Several general hypotheses concerning the cause of foreign accent in L2 speech production are summarized in the introductory section. In the next section, a model of L2 speech learning that aims to account for age-related changes in L2 pronunciation is presented. The next three sections present summaries of empirical research dealing with the production and perception of L2 vowels, word-initial consonants, and word-final consonants. The final section discusses questions of general theoretical interest, with special attention to a featural (as opposed to a segmental) level of analysis. Although nonsegmental (i.e., prosodic) dimensions are an important source of foreign accent, the present chapter focuses on phoneme-sized units of speech. Although many different languages are learned as an L2, the focus is on the acquisition of English.
Article
Full-text available
Spoken word recognition involves the segmentation and identification of a continuous and highly complex stimulus. It has been proposed that, in segmenting speech, listeners apply a universal rhythmic strategy that has language-specific manifestations depending on the phonological characteristics of their native language (Cutler, Mehler, Norris, & Segui, 1983, 1986): While native listeners of Romance languages like French are said to rely on syllabic structures, native listeners of Germanic languages like English or Dutch would use metrical structures. In the first part of the present paper, these proposals are discussed with regard to speech segmentation in monolinguals. It will be argued that word stress may provide powerful cues to word boundaries in both French and Dutch. The second part of the present contribution addresses the issue of speech segmentation in bilinguals, and, in particular, the claim that bilinguals develop a single rhythmic segmentation procedure restricted to their dominant language (Cutler, Mehler, Norris, & Segui, 1992). It will be argued instead that the use of adapted rhythmic segmentation cues is a necessary component of second language acquisition, and, consequently, that bilinguals who attain a high level of proficiency in their second language are able to exploit the rhythmic structures of that language in speech segmentation.
Article
Full-text available
Development of the second-language lexicon was investigated in two on-line experiments. In both experiments, priming was examined within the second language under automatic conditions and for nonnative speakers of two different levels of performance. Experiment 1 showed second-language priming for various lexical relationships for proficient nonnative speakers. Moreover, the results found for the proficient bilinguals were highly similar to those found for a group of native control subjects. Experiment 2 examined priming of the dominant and subordinate meanings of biased homographs such as “seal.” Priming was found for both meanings, for proficient bilinguals working in their second language, and for native control subjects, but only for the dominant meanings for a group of intermediate nonnative speakers. Again, the pattern of results found for the proficient bilinguals and native controls was highly similar. The ensemble of these results provides evidence for second-language autonomy, which is determined, however, by both level of expertise and type of lexical relationship. The autonomy postulated here is limited, moreover, to exploiting the second-language lexicon for the purposes of recognition and cannot at present be said to extend to production.
Article
Full-text available
In two experiments Dutch–English bilinguals were tested with English words varying in their degree of orthographic, phonological, and semantic overlap with Dutch words. Thus, an English word target could be spelled the same as a Dutch word and/or could be a near-homophone of a Dutch word. Whether such form similarity was accompanied with semantic identity (translation equivalence) was also varied. In a progressive demasking task and a visual lexical decision task very similar results were obtained. Both tasks showed facilitatory effects of cross-linguistic orthographic and semantic similarity on response latencies to target words, but inhibitory effects of phonological overlap. A third control experiment involving English lexical decision with monolinguals indicated that these results were not due to specific characteristics of the stimulus material. The results are interpreted within an interactive activation model for monolingual and bilingual word recognition (the Bilingual Interactive Activation model) expanded with a phonological and a semantic component.
Article
Full-text available
Adult Spanish second language (L2) learners of English and native speakers of English participated in an English perception task designed to investigate their ability to use L2 acoustic-phonetic cues, e.g., aspiration, to segment the stream of speech into words. Subjects listened to a phrase and indicated whether they heard, e.g., keep sparking or keeps parking. The results indicate that learners are significantly worse than native speakers at using acoustic-phonetic cues, and that some types of stimuli are easier for learners to segment than others. The findings suggest that various factors, including transfer and markedness, may be relevant to success in L2 segmentation.
Article
Full-text available
We propose that one important role for connectionist research in language acquisition is analysing what linguistic information is present in the child's input. Recent connectionist and statistical work analysing the properties of real language corpora suggests that a priori objections against the utility of distributional information for the child are misguided. We illustrate our argument with examples of connectionist and statistical corpus-based research on phonology, segmentation, morphology, word classes, phrase structure, and lexical semantics. We discuss how this research relates to other empirical and theoretical approaches to the study of language acquisition.
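One concrete instance of the distributional information discussed above is the transitional probability between adjacent syllables, which tends to dip at word boundaries. A hedged sketch of that computation over a toy syllable stream (the corpus and the 0.75 threshold are invented for illustration and are not a claim about the models reviewed in the paper):

```python
from collections import Counter

# Sketch: estimate syllable-to-syllable transitional probabilities from a toy
# syllable stream and posit word boundaries where the probability is low.
corpus = ["pa", "bi", "ku", "ti", "bu", "do", "pa", "bi", "ku", "go", "la", "tu"]

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def transitional_prob(a: str, b: str) -> float:
    """P(b | a) estimated from the toy corpus."""
    return bigrams[(a, b)] / unigrams[a] if unigrams[a] else 0.0

# Insert a boundary wherever the forward transitional probability falls below
# an illustrative threshold of 0.75.
probs = [transitional_prob(a, b) for a, b in zip(corpus, corpus[1:])]
boundaries = [i + 1 for i, p in enumerate(probs) if p < 0.75]
print(probs)
print(boundaries)  # [3, 9] -> segments "pa bi ku | ti bu do pa bi ku | go la tu"
```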
Article
This research examined English speakers' intuitions about the phonological “goodness” of nonsense words. Subjects rated bisyllabic, CVCCVC nonsense words that varied in phonotactic probability and stress placement. Using a ten-point scale, subjects judged how English-like the nonsense words sounded. Although all nonsense words were phonotactically legal in English, subjects showed strong preferences for stimuli composed of highly probable phonotactic configurations. Moreover, subjects judged nonsense words with strong–weak stress patterns as constituting “better” sounding English words than nonsense words with weak–strong patterns. No interaction between phonotactic probability and stress was observed. These results will be discussed in light of recent findings regarding adults' (Auer and Luce, 1993) and infants' (Jusczyk, Luce, and Charles-Luce, 1994) sensitivity to the phonotactic configurations of spoken stimuli. [Work supported by NIDCD.]
Thesis
The dissertation investigates interlingual lexical processes of word recognition and lexical access in Hungarian-German bilinguals learning English as a foreign language, with particular attention to the role of cognates. The aim of the study is to describe the processes of lexical activation in a polyglot system and to model both the mental lexicons and their interconnection and mutual activation (e.g., through 'direct word association' or through 'concept mediation'). Three dependent variables are examined in a quantitative and qualitative analysis of empirical data: accuracy, response-time latencies, and phonological interference. The results of the experiments are interpreted within the framework of a multilingual network model.
Article
This chapter questions one of the most basic assumptions within syllable theory: namely that phonotactic constraints are largely syllable-based. In this chapter I argue that segmental and feature-based phonotactic constraints on consonant sequencing are most profitably viewed as syllable-independent statements. Evidence for the syllable-independent nature of phonotactics comes from three domains. First, it can be demonstrated that, language-internally, the syllable-based view of phonotactics is, in many cases, empirically inadequate. Second, cross-linguistic comparisons demonstrate that languages with arguably distinct syllabifications have identical phonotactic constraints. Third, emergent phonotactic universals on consonant sequencing are only evident when phonotactics are stated independent of syllable structure. Three important points need to be made at the outset. First, though I present evidence that phonotactics are to a large extent independent of syllable structure, I am not denying the existence of syllables. On the contrary, in many of the languages examined in this study, evidence for phonological syllables exists in the form of syllable-sensitive rules of stress assignment, syncope, vowel reduction, reduplication, and consistent judgments of syllabifications across speakers. Second, in languages where phonotactic statements and other arguably syllable-based statements do not converge on a single syllable structure, one might argue for “basic” and “derived” syllabifications for distinct phonological domains. However, if two distinct syllabifications are needed for many of the world's languages precisely where phonotactics are involved, then one alternative strategy is to consider the possibility that phonotactics are not syllable based.
Article
This article attempts to outline the contribution of segmental and suprasegmental phenomena to the rudimentary perceptual decoding carried out by beginning second-language learners. According to Berkovits (1980), phonetic skills would matter most at the beginner level, given that a learner with no knowledge of the target language cannot draw on the semantic, pragmatic, or syntactic information (Conrad, 1985) available to the advanced learner or the native speaker. As the literature reviewed shows, whether comprehension serves a communicative purpose or the development of the target language in general (in the sense of Sharwood-Smith, 1986), the decoding process in listening comprehension appears to be tied to the development of phonetic skills insofar as these contribute to the segmentation of the input and to lexical access, and provide contextual linguistic cues that give access to a rudimentary meaning.
Article
Bilingualism provides a unique opportunity for exploring hypotheses about how the human brain encodes language. For example, the “input switch” theory states that bilinguals can deactivate one language module while using the other. A new measure of spoken language comprehension, headband-mounted eyetracking, allows a firm test of this theory. When given spoken instructions to pick up an object, in a monolingual session, late bilinguals looked briefly at a distractor object whose name in the irrelevant language was initially phonetically similar to the spoken word more often than they looked at a control distractor object. This result indicates some overlap between the two languages in bilinguals, and provides support for parallel, interactive accounts of spoken word recognition in general.
Article
This article examines ways in which teachers may modify their language according to the competence of their students and is based on research conducted using two groups of students who were receiving part-time EFL instruction at a college of Further Education in the UK. An essential ingredient of this research project was the fact that instead of having two teachers, one for each group, only one teacher taught both groups, thus removing from the corpus one of the most important variables, namely differences between the speech of two or more teachers. After presenting a list of parameters by which the corpus for both groups of students can be analyzed and suggesting a new unit to be used in the analysis of spoken discourse, a comparative analysis of the results for both groups will be presented. This will be used to support the hypothesis that teachers do modify their speech according to the competence of their students. An awareness and understanding of why this happens and the role of these modifications in SLA pedagogy will be discussed.
Article
This paper examines the problem of lexical segmentation in order to identify language-specific and language-independent properties of lexical processing systems. A potential universal of lexical segmentation is first evaluated for its efficacy across languages. This postlexical segmentation strategy assumes that listeners recognize each word rapidly enough to be able to predict both its offset and the onset of the following word. It is argued that this strategy cannot account for every lexical segmentation decision in utterances in any language. Listeners must also use other types of segmentation information. Two types of segmentation information, distributional and relational, which appear to vary crosslinguistically, are presented with examples. To exploit this information, listeners must infer word boundaries or boundaries of other domains from events or sequences of events in the signal. The problem posed for current models of spoken word recognition by the integration of such segmentation information with that from the putative postlexical strategy is discussed.
Article
This paper presents a theory of inductive learning (i-learning), a form of induction which is neither concept learning nor hypothesis-formation, but rather which takes place within the autonomous and modular representational systems (levels of representation) of the language faculty. The theory is called accordingly the Autonomous Induction Theory. Second language acquisition (SLA) is conceptualized in this theory as:
• learning linguistic categories from universal and potentially innate featural primitives;
• learning configurations of linguistic units; and
• learning correspondences of configurations across the autonomous levels.
Here I concentrate on the problem of constraining learning theories and argue that the Autonomous Induction Theory is constrained enough to be taken seriously as a plausible approach to explaining second language acquisition.
Article
Language realized as speech undergoes a striking metamorphosis from pre-dynamic citation form strings (words, sentences, text) to dynamic running speech (the flow of quasi-uninterrupted acoustic energy). This transition is not possible without a host of absorption processes that alter segmental sequences through assimilation, reduction, loss and similar levelling features and result in restructured syllabication. The phonostylistics of the spoken language, as the statistical analysis of a representative sample of informal American speech shows, exposes vowels and consonants, words and sentences as perhaps less pivotal entities in running speech than sonorants and obstruents, syllables and syllabic phrases (i.e. “runs” of articulated speech defined by their respective pausal envelopes). Such an inquiry into the nature of running speech has implications, not only for our understanding of the properties of the oral language, but also for second language acquisition in the way listening comprehension and oral fluency acquisition may be facilitated.
Article
This paper describes a computerised database of psycholinguistic information. Semantic, syntactic, phonological and orthographic information about some or all of the 98,538 words in the database is accessible, by using a specially-written and very simple programming language. Word-association data are also included in the database. Some examples are given of the use of the database for selection of stimuli to be used in psycholinguistic experimentation or linguistic research.
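The database itself is queried through its own simple programming language; purely for illustration, a comparable stimulus-selection step could look like the following Python sketch over a hypothetical word table (field names and values are invented, not taken from the database):

```python
# Hypothetical stimulus-selection query in the spirit of the database described
# above (field names and values are invented for illustration; the original
# system uses its own simple query language, not Python).
words = [
    {"word": "rose",   "freq": 51, "syllables": 1, "familiarity": 560},
    {"word": "parse",  "freq": 3,  "syllables": 1, "familiarity": 410},
    {"word": "butter", "freq": 27, "syllables": 2, "familiarity": 590},
]

# Select monosyllabic, reasonably frequent, highly familiar items as stimuli.
stimuli = [w["word"] for w in words
           if w["syllables"] == 1 and w["freq"] >= 10 and w["familiarity"] >= 500]
print(stimuli)  # ['rose']
```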
Article
Using gender decision and shadowing tasks, we compared recognition of French nouns with early or late uniqueness points (UP) that were articulated at three different rates. With gender decision, the medium rate (3.6 syllables (syll)/s), which is close to that used by Radeau, Mousty, and Bertelson (1989), gave rise to a comparable UP location effect. The effect increased at the slower rate (2.2 syll/s), but disappeared at the faster rate (5.6 syll/s). With shadowing, only the slow rate gave rise to a UP effect. A similar pattern of results was found using speech that was linearly compressed or expanded. Because the fast rate is close to that typical of conversational speech, the present results cast doubt on the relevance of the UP in the processing of fluent speech. The implications of rate effects for models of spoken word recognition are discussed.
Article
Recent work (Vitevitch & Luce, 1998) investigating the role of phonotactic information in spoken word recognition suggests the operation of two levels of representation, each having distinctly different consequences for processing. The lexical level is marked by competitive effects associated with similarity neighborhood activation, whereas increased probabilities of segments and sequences of segments facilitate processing at the sublexical level. We investigated the two proposed levels in six experiments using monosyllabic and specially constructed bisyllabic words and nonwords. The results of these studies provide further support for the hypothesis that the processing of spoken stimuli is a function of both facilitatory effects associated with increased phonotactic probabilities and competitive effects associated with the activation of similarity neighborhoods. We interpret these findings in the context of Grossberg, Boardman, and Cohen's (1997) adaptive resonance theory of speech perception.
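The phonotactic-probability measures referred to here are conventionally computed as positional segment probabilities and positional biphone probabilities over a frequency-weighted lexicon. A rough sketch of that computation, using an invented toy lexicon of phoneme strings in place of a transcribed corpus:

```python
from collections import defaultdict

# Sketch of two standard phonotactic-probability measures (positional segment
# probability and positional biphone probability) over a toy, frequency-
# weighted lexicon. The lexicon and weights are invented for illustration.
lexicon = {"kat": 120, "kit": 80, "bat": 60, "bit": 40, "tap": 30}

pos_counts = defaultdict(float)   # (position, segment) -> weighted count
pos_totals = defaultdict(float)   # position -> weighted total
bi_counts = defaultdict(float)    # (position, biphone) -> weighted count
bi_totals = defaultdict(float)

for word, freq in lexicon.items():
    for i, seg in enumerate(word):
        pos_counts[(i, seg)] += freq
        pos_totals[i] += freq
    for i in range(len(word) - 1):
        bi_counts[(i, word[i:i + 2])] += freq
        bi_totals[i] += freq

def segment_prob(word: str) -> float:
    """Mean positional segment probability."""
    return sum(pos_counts[(i, s)] / pos_totals[i] for i, s in enumerate(word)) / len(word)

def biphone_prob(word: str) -> float:
    """Mean positional biphone probability."""
    pairs = [(i, word[i:i + 2]) for i in range(len(word) - 1)]
    return sum(bi_counts[p] / bi_totals[p[0]] for p in pairs) / len(pairs)

print(segment_prob("kat"), biphone_prob("kat"))   # high-probability pattern
print(segment_prob("tib"), biphone_prob("tib"))   # lower-probability pattern
```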
Article
A series of progressive demasking and lexical decision experiments investigated how the recognition of target words exclusively belonging to one language is affected by the existence of orthographic neighbors from the same or the other language of bilingual participants. Increasing the number of orthographic neighbors in Dutch systematically slowed response times to English target words in Dutch/English bilinguals, while an increase in target language neighbors consistently produced inhibitory effects for Dutch and facilitatory effects for English target words. Monolingual English speakers also showed facilitation due to English neighbors, but no effect of Dutch neighbors. The experiments provide evidence for parallel activation of words in an integrated Dutch/English lexicon. An implemented version of such a model making these assumptions, the Bilingual Interactive Activation (BIA) model, is shown to account for the overall pattern of results.
Article
In describing the phonotactics (patterning of phonemes) of English syllables, linguists have focused on absolute restrictions concerning which phonemes may occupy which slots of the syllable. To determine whether probabilistic patterns also exist, we analyzed the distributions of phonemes in a reasonably comprehensive list of uninflected English CVC (consonant–vowel–consonant) words, some 2001 words in all. The results showed that there is a significant connection between the vowel and the following consonant (coda), with certain vowel–coda combinations being more frequent than expected by chance. In contrast, we did not find significant associations between the initial consonant (onset) and the vowel. These findings support the idea that English CVC syllables are composed of an onset and a vowel–coda rime. Implications for lexical processing are discussed.
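The reported vowel-coda association rests on comparing observed co-occurrence counts against the counts expected if vowels and codas combined independently. A small sketch of that comparison over an invented CVC list (the original analysis used roughly 2,001 uninflected English CVC words):

```python
from collections import Counter

# Sketch of the observed-vs.-expected comparison behind the vowel-coda
# association, over a toy CVC word list (orthography stands in for phonemes).
cvc_words = ["cat", "cab", "cot", "cob", "bat", "bot", "bit", "bib"]

vowel_coda = Counter((w[1], w[2]) for w in cvc_words)  # joint counts
vowels = Counter(w[1] for w in cvc_words)              # vowel marginals
codas = Counter(w[2] for w in cvc_words)               # coda marginals
n = len(cvc_words)

for (v, c), observed in sorted(vowel_coda.items()):
    expected = vowels[v] * codas[c] / n   # count expected under independence
    print(f"V={v} C={c}: observed={observed}, expected={expected:.2f}")
```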
Article
Word frequency influences both the production and perception of speech. Speakers increase stress for infrequent words, and poor listening conditions cause listeners to mistake rare words for common ones, but not vice versa. People can also directly estimate relative word frequencies to within one order of magnitude of their objective values. To determine if word frequency effects were determined at least in part by phonotactics (versus by semantics), subjects were asked to estimate word frequencies for a list of words containing both real and nonsense words. Subjects duplicated previous results in their ability to judge the frequency of real words, and showed significant agreement in their judgements of “frequencies” for nonsense words. Subjects' judgments for the frequency of nonsense words showed significant correlation with Greenberg and Jenkins' measure for distance from English, implying that word frequencies are judged by estimating the density of similar sounding words in the lexicon. These results suggest additions to lexical distance metrics that would improve performance of speech recognition systems: small vocabulary systems could tabulate word frequency during system usage and bias recognition towards more frequently used words, while large vocabulary systems could use lexical density to modulate calculated distances between candidate words and the input utterance.
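The small-vocabulary suggestion above amounts to adding a frequency bias to the acoustic match score so that, all else being equal, common words win over rare ones. A hedged sketch, with invented usage counts, acoustic scores, and weighting constant:

```python
import math

# Sketch of the small-vocabulary suggestion: tabulate how often each word has
# been used and add a log-frequency bias to the acoustic match score so that,
# all else equal, common words beat rare ones. All numbers are invented.
usage_counts = {"keep": 100, "keeps": 2, "parking": 30, "sparking": 1}

def phrase_score(phrase: str, acoustic: float, weight: float = 0.5) -> float:
    """Acoustic score plus a weighted log-frequency bias summed over words."""
    freq_bias = sum(math.log1p(usage_counts.get(w, 0)) for w in phrase.split())
    return acoustic + weight * freq_bias

# Two near-tied acoustic hypotheses for the same stretch of speech.
candidates = {"keeps parking": 1.05, "keep sparking": 1.00}
best = max(candidates, key=lambda p: phrase_score(p, candidates[p]))
print(best)  # "keep sparking" wins: its words are far more frequent overall
```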
Article
Three experiments are reported in which picture naming and bilingual translation were performed in the context of semantically categorized or randomized lists. In Experiments 1 and 3 picture naming and bilingual translation were slower in the categorized than randomized conditions. In Experiment 2 this category interference effect in picture naming was eliminated when picture naming alternated with word naming. Taken together, the results of the three experiments suggest that in both picture naming and bilingual translation a conceptual representation of the word or picture is used to retrieve a lexical entry in one of the speaker's languages. When conceptual activity is sufficiently great to activate a multiple set of corresponding lexical representations, interference is produced in the process of retrieving a single best lexical candidate as the name or translation. The results of Experiment 3 showed further that category interference in bilingual translation occurred only when translation was performed from the first language to the second language, suggesting that the two directions of translation engage different interlanguage connections. A model to account for the asymmetric mappings of words to concepts in bilingual memory is described.
Article
Listeners appear to use phonotactic constraints in the segmentation of continuous speech. Because some strings of phonemes (such as [lv] and [mr] in Dutch) never occur within the same syllable, they cue syllable boundaries, and hence possible word boundaries. Dutch listeners found it easier to detect words at the beginnings of nonsense sequences when the words were aligned with a phonotactic boundary (e.g., pil, “pill”, in [pil.vrem]) than when they were misaligned (e.g., pil in [pilm.rem]). This effect was stronger for words at the ends of nonsense sequences (e.g., rok, “skirt”, in [fim.rɐk] and [fi.drɐk]). Two control experiments indicated that although part of this effect can be attributed to the role of phonotactics in speech production, there remains a significant perceptual component; the legality of sound sequences appears to be computed on-line during recognition. It is argued that segmentation is achieved via competition between candidate words, that competition is modulated by knowledge about where in the input candidates are unlikely to begin or end, and that phonotactic constraints are one of several information sources used in this segmentation process.
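The boundary-cueing idea can be sketched directly: any phoneme pair that never occurs within a syllable (such as Dutch [lv] or [mr]) marks a syllable boundary, and hence a possible word boundary. A minimal illustration, with the illegal-sequence set standing in for a real phonotactic grammar:

```python
# Sketch of using phonotactically illegal within-syllable sequences as
# boundary cues, in the spirit of the Dutch examples above. The sequence set
# is a toy stand-in for a full phonotactic grammar.
ILLEGAL_WITHIN_SYLLABLE = {("l", "v"), ("m", "r")}

def boundary_hypotheses(phonemes: list[str]) -> list[int]:
    """Return positions at which a boundary is cued (a boundary falls between
    phonemes i-1 and i whenever that pair cannot share a syllable)."""
    return [i for i in range(1, len(phonemes))
            if (phonemes[i - 1], phonemes[i]) in ILLEGAL_WITHIN_SYLLABLE]

print(boundary_hypotheses(list("pilvrem")))  # [3] -> "pil" aligned with a boundary
print(boundary_hypotheses(list("pilmrem")))  # [4] -> "pil" misaligned ("pilm" + "rem")
```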
Article
Segmentation of continuous speech into its component words is a nontrivial task for listeners. Previous work has suggested that listeners develop heuristic segmentation procedures based on experience with the structure of their language; for English, the heuristic is that strong syllables (containing full vowels) are most likely to be the initial syllables of lexical words, whereas weak syllables (containing central, or reduced, vowels) are nonword-initial, or, if word-initial, are grammatical words. This hypothesis is here tested against natural and laboratory-induced missegmentations of continuous speech. Precisely the expected pattern is found: listeners erroneously insert boundaries before strong syllables but delete them before weak syllables; boundaries inserted before strong syllables produce lexical words, while boundaries inserted before weak syllables produce grammatical words.
Article
Current theories of spoken-word recognition posit two levels of representation and process: lexical and sublexical. By manipulating probabilistic phonotactics and similarity-neighborhood density, we attempted to determine if these two levels of representation have dissociable effects on processing. Whereas probabilistic phonotactics have been associated with facilitatory effects on recognition, increases in similarity-neighborhood density typically result in inhibitory effects on recognition arising from lexical competition. Our results demonstrated that when the lexical level is invoked using real words, competitive effects of neighborhood density are observed. However, when strong lexical effects are removed by the use of nonsense word stimuli, facilitatory effects of phonotactics emerge. These results are consistent with a two-level framework of process and representation embodied in certain current models of spoken-word recognition.
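Similarity-neighborhood density is standardly operationalized as the number of lexical items that differ from a target by a single phoneme substitution, addition, or deletion. A small sketch of that count over an invented toy lexicon (orthographic strings stand in for phonemic transcriptions):

```python
# Sketch of a similarity-neighborhood-density count: the number of lexical
# items reachable from a target by one segment substitution, addition, or
# deletion. The toy lexicon is invented for illustration.
LEXICON = {"cat", "bat", "cot", "cab", "can", "scat", "at", "cast"}

def one_edit_variants(word: str) -> set[str]:
    """All strings one substitution, addition, or deletion away from `word`."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    subs = {word[:i] + c + word[i + 1:] for i in range(len(word)) for c in alphabet}
    adds = {word[:i] + c + word[i:] for i in range(len(word) + 1) for c in alphabet}
    dels = {word[:i] + word[i + 1:] for i in range(len(word))}
    return (subs | adds | dels) - {word}

def neighborhood_density(word: str) -> int:
    return len(one_edit_variants(word) & LEXICON)

print(neighborhood_density("cat"))   # dense: bat, cot, cab, can, scat, cast, at
print(neighborhood_density("cast"))  # sparse in this toy lexicon: only "cat"
```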
Article
Information on word recognition and lexical access has been dominated by research on English. French, with its different prosodic coding and its phonological processes of liaison, enchaînement, and e-deletion, represents an opportunity to widen the understanding of speech recognition. A listening test was carried out containing 20 utterances and ten distractors ranging from one to six syllables and forming seven pairs and two triplets of supposedly “identical” linguistic phrases. A representative sample of each type produced by a male and a female French speaker was presented five times in random order to 18 native listeners of French (including the two speakers) and nine Swedish learners of French as a foreign language. No group identified any of the original utterances correctly, and only two pairs of stimuli were identified at random. Taking into account the frequencies of the stimuli, it appears that most stimuli contain some acoustic information that guides recognition. An acoustic analysis...
Book
It is well-known that phonemes have different acoustic realizations depending on the context. Thus, for example, the phoneme /t/ is typically realized with a heavily aspirated strong burst at the beginning of a syllable as in the word Tom, but without a burst at the end of a syllable in a word like cat. Variation such as this is often considered to be problematic for speech recognition: (1) "In most systems for sentence recognition, such modifications must be viewed as a kind of 'noise' that makes it more difficult to hypothesize lexical candidates given an input phonetic transcription. To see that this must be the case, we note that each phonological rule [in a certain example] results in irreversible ambiguity-the phonological rule does not have a unique inverse that could be used to recover the underlying phonemic representation for a lexical item. For example, . . . schwa vowels could be the first vowel in a word like 'about' or the surface realization of almost any English vowel appearing in a sufficiently destressed word. The tongue flap [ɾ] could have come from a /t/ or a /d/." [65, pp. 548-549] This view of allophonic variation is representative of much of the speech recognition literature, especially during the late 1970's. One can find similar statements by Cole and Jakimik [22] and by Jelinek [50].
Article
Two experiments were conducted to determine the functional status of cognates. Two hypotheses were considered. According to the first hypothesis, language is a critical feature governing lexical organization, and cognates may therefore be equated with morphologically unrelated translations. According to a second hypothesis, however, language is not a critical feature governing lexical organization. Instead, the boundaries between perceptual categories are determined by morphological considerations, and cognates may therefore be equated with intra-lingual variations such as inflections and derivations. If the first hypothesis is correct, cognate performance should follow that observed for translations, but if the second hypothesis is correct, cognate performance should follow that observed for inflections and derivations. The experiments used different procedures in order to discount task-specific explanations. The first experiment involved repetition priming in a lexical decision task, and emphasis was placed on relative priming; that is, on the amount of facilitation which occurs when, for example, OBEDIENCIA primes OBEDIENCE, expressed as a fraction of the amount of facilitation that occurs when the same word is presented on each occasion (i.e., when OBEDIENCE is used to prime OBEDIENCE). The second experiment tested memory for language. Four types of cognates were tested. These were: orthographically identical cognates, regular cognates with cion/tion substitution, regular cognates with dad/ty substitution, and irregularly derived cognates. The results were unequivocal. The priming values observed previously for cognates were qualitatively and quantitatively similar to those observed for inflections and derivations, and this classification was confirmed in the second experiment, involving memory for language. The results are consistent with the general proposition that morphology rather than language governs the boundaries between perceptual categories, and a number of specific explanations are reviewed.
Article
This article reports on a diary study that revealed beliefs and knowledge second language learners had about their listening. From an analysis of the diaries of 40 ESL learners, it was found that many of them had clear ideas about three aspects of listening: their own role and performance as second language listeners, the demands and procedures of second language listening, and strategies for listening. The article discusses the implications of these findings for the teaching and learning of listening in ELT programmes. It calls for more discussion to increase learners’ metacognitive awareness in listening, and argues for the use of listening diaries as a learning tool for this purpose.