Article

Examining the effectiveness of bilingual subtitles for comprehension: An eye-tracking study


Abstract

The present study examined the relative effectiveness of bilingual subtitles for L2 viewing comprehension compared to other subtitling types. Learners’ allocation of attention to the image and subtitles/captions in different viewing conditions, as well as the relationship between attention and comprehension, were also investigated. A total of 112 Chinese learners of English watched an English documentary clip in one of four conditions (bilingual subtitles, captions, L1 subtitles, no subtitles) while their eye movements were recorded. The results revealed that bilingual subtitles were as beneficial as L1 subtitles for comprehension, and both outscored captions and no subtitles. Participants using bilingual subtitles spent significantly more time processing the L1 lines than the L2 lines. The L1 lines of bilingual subtitles were processed for significantly longer than the same lines in L1 subtitles, whereas the L2 lines were processed for significantly less time than the same lines in captions. No significant relationship was found between processing time and comprehension for either the L1 or the L2 lines of bilingual subtitles.


... We present two worked examples using data collected by Wang (2022) on learning from subtitled viewing. Wang's (2022) research aimed to investigate the effects of different subtitling types (i.e., captions, L1 subtitles, and bilingual subtitles) on L2 learners' comprehension (Wang & Pellicer-Sánchez, 2022a) and incidental vocabulary learning (Wang & Pellicer-Sánchez, 2022b), and explored learners' engagement with unknown words during viewing using eye-tracking and stimulated recalls. In Wang's (2022) research, a number of unknown words from the video were selected as target words (TWs) and participants' prior knowledge of those words was tested by means of pre-tests. ...
Article
Full-text available
Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.
Article
Full-text available
This study examined the effectiveness of bilingual subtitles relative to captions, subtitles, and no subtitles for incidental vocabulary learning. Learners’ processing of novel words in the subtitles and its relationship to learning gains were also explored. While their eye movements were recorded, 112 intermediate to advanced Chinese learners of English watched a documentary in one of 4 conditions: bilingual subtitles, captions, L1 subtitles, and no subtitles. Vocabulary pretests and posttests assessed the participants’ knowledge of the target vocabulary for form recognition, meaning recall, and meaning recognition. Results suggested an advantage for bilingual subtitles over captions for meaning recognition and over L1 subtitles for meaning recall. Bilingual subtitles were less effective than captions for form recognition. Participants in the bilingual subtitles group spent more time reading the Chinese translations of the target items than the English target words. The amount of attention to the English target words (but not to the translations) predicted learning gains.
Article
Full-text available
While the use of dual subtitles (concurrent L1 subtitles and L2 captions) has been studied in L2 research, more studies are needed to better understand the impact that this on-screen textual aid can have on vocabulary learning and comprehension. Therefore, this study explored whether there were significant differences in vocabulary learning and listening comprehension between EFL students who watched L2 videos with L1 subtitles, L2 captions, and dual subtitles. Participants (N = 96) were quasi-randomly divided into three equal groups (n = 32) under each on-screen textual aid condition and viewed an episode from a sitcom through Netflix. Pre- and post-tests were administered to measure gains in vocabulary learning at two different levels among 20 target words that appeared in the episode. A 15-item listening comprehension test was also administered post-viewing to determine if there were significant differences in comprehension. Results indicated that the L1 subtitles and dual subtitles groups performed better than the L2 captions group in terms of vocabulary learning, whereas the participants who viewed the episode with dual subtitles did significantly better than the other two groups in listening comprehension. These findings suggest that L1 subtitles, either alone or with L2 captions, are key to supporting vocabulary learning and comprehension of video.
Article
Full-text available
Theories of multimedia learning suggest that learners can form better referential connections when verbal and visual materials are presented simultaneously. Furthermore, the addition of auditory input in reading-while-listening conditions benefits performance on a variety of linguistic tasks. However, little research has been conducted on the processing of multimedia input (written text and images) with and without accompanying audio. Eye movements were recorded during young L2 learners’ (N = 30) processing of a multimedia story text in reading-only and reading-while-listening conditions to investigate looking patterns and their relationship with comprehension using a multiple-choice comprehension test. Analysis of the eye-movement data showed that the presence of audio in reading-while-listening conditions allowed learners to look at the image more often. Processing time on text was related to lower levels of comprehension, whereas processing time on images was positively related to comprehension.
Article
Full-text available
This study investigated the effects of different types of captions on English as a Foreign Language Learners’ (EFL) vocabulary learning and comprehension. Eighty students in a Chinese university participated. Students were divided into four groups with two classes of freshmen, one class of juniors, and one class of graduate students. Each group watched four video clips with four caption conditions: L1 Chinese, L2 English, dual (L1 and L2), and no captions. The order and caption conditions were counterbalanced. The purpose of the study was to find which caption condition is more effective for EFL learners. Four by four mixed ANOVAs were used to compare the differences among the four conditions and groups. Results indicated that students’ performances were statistically significantly different across captions and class levels. In general, students in L1, L2, and dual captions statistically outperformed the no caption condition in vocabulary and comprehension. Results of the effects of L1, L2, and dual captions on vocabulary learning and comprehension were mixed. The pedagogical implications of using authentic TV series and multimedia captions were discussed.
Article
Full-text available
In the past years, there has been a surge in the number of studies focusing on learning vocabulary from audiovisual input. These studies have shown that learners can pick up new words incidentally when watching TV (Peters & Webb, 2018; Rodgers & Webb, 2019). Research has also shown that the presence of on-screen text (L1 or L2 subtitles) might increase learning gains (Montero Perez et al., 2014; Winke et al., 2010). Learning is sometimes explained in terms of the beneficial role of on-screen imagery in audiovisual input (Rodgers, 2018). However, little is known about the effect of imagery on word learning and how it might interact with L1 subtitles and captions. This study investigates the effect of imagery in three TV viewing conditions: (1) with L1 subtitles, (2) with captions, and (3) without subtitles. Data were collected from 142 Dutch-speaking EFL learners. A pretest-posttest design was adopted in which learners watched a 12-minute excerpt from a documentary. The findings show that the captions group made the most vocabulary learning gains. Secondly, imagery was positively related to word learning. This means that words that were shown in close proximity to the aural occurrence of the words were more likely to be learned.
Article
Full-text available
Captions provide a useful aid to language learners for comprehending videos and learning new vocabulary, aligning with theories of multimedia learning. Multimedia learning predicts that a learner's working memory (WM) influences the usefulness of captions. In this study, we present two eye-tracking experiments investigating the role of WM in captioned video viewing behavior and comprehension. In Experiment 1, Spanish-as-a-foreign-language learners differed in caption use according to their level of comprehension and to a lesser extent, their WM capacities. WM did not impact comprehension. In Experiment 2, English-as-a-second-language learners differed in comprehension according to their WM capacities. Those with high comprehension and high WM used captions less on a second viewing. These findings highlight the effects of potential individual differences and have implications for the integration of multimedia with captions in instructed language learning. We discuss how captions may help neutralize some of working memory's limiting effects on learning.
Article
Full-text available
With the proliferation and global dissemination of audiovisual products, subtitles have been widely used as a cost-effective tool to minimise language barriers for audiences of diverse cultural and linguistic backgrounds. However, the effectiveness of subtitles is still a topic of much debate and subject to various conditions, such as the context of use, the subtitle type, and the relationship between the language of the soundtrack and that of the subtitles. Drawing on an analysis of eye movements and a self-reported questionnaire, this study compares the impact of bilingual subtitles to that of monolingual subtitles in terms of viewers’ visual attention distribution, cognitive load, and overall comprehension of video content. Twenty Chinese (L1) native speakers watched four videos with English (L2) audio, each in a different condition: with Chinese subtitles (interlingual/L1 subtitles), with English subtitles (intralingual/L2 subtitles), with both Chinese and English subtitles (bilingual subtitles), and without subtitles. Our results indicate that viewers’ visual attention allocation to L1 subtitles was more stable than to L2 subtitles and less sensitive to the increased visual competition in the bilingual condition, which, we argue, can be attributed to the language dominance of their native language. Bilingual subtitles as a combination of intralingual and interlingual subtitles did not appear to induce more cognitive load or produce more cognitive gain than monolingual subtitles. Compared with the no subtitles condition, however, we found bilingual subtitles to be more beneficial as they provided linguistic support to make the video easier to comprehend and facilitate the learning process.
Article
Full-text available
Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmmTMB, and compare it to other R packages that fit zero-inflated mixed models. The glmmTMB package fits many types of GLMMs and extensions, including models with continuously distributed responses, but here we focus on count responses. glmmTMB is faster than glmmADMB, MCMCglmm, and brms, and more flexible than INLA and mgcv for zero-inflated modeling. One unique feature of glmmTMB (among packages that fit zero-inflated mixed models) is its ability to estimate the Conway-Maxwell-Poisson distribution parameterized by the mean. Overall, its most appealing features for new users may be the combination of speed, flexibility, and its interface's similarity to lme4.
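
As an illustration of the interface described above, the following is a minimal sketch (not taken from the paper) of a zero-inflated Poisson GLMM fitted with glmmTMB; the data frame d, with a count response, a condition factor, and a participant identifier, is hypothetical.

    library(glmmTMB)

    # Zero-inflated Poisson mixed model: a fixed effect of condition,
    # random intercepts by participant, and a single zero-inflation probability.
    m_zip <- glmmTMB(count ~ condition + (1 | participant),
                     ziformula = ~ 1,
                     family = poisson,
                     data = d)
    summary(m_zip)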
Article
Full-text available
One of the frequent questions from users of the mixed model function lmer in the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions to provide p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I-III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using the Kenward-Roger approximation for denominator degrees of freedom (based on the KRmodcomp function from the pbkrtest package). The package also provides other convenient mixed-model analysis tools, such as a step method that performs backward elimination of non-significant effects (both random and fixed), calculation of population means, and multiple comparison tests, together with plotting facilities.
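
A minimal sketch of the usage pattern described above, assuming a hypothetical data frame d with a continuous score, a condition factor, and a participant identifier:

    library(lmerTest)  # masks lme4::lmer so that fitted models report p values

    m <- lmer(score ~ condition + (1 | participant), data = d)
    summary(m)                        # t tests with Satterthwaite degrees of freedom
    anova(m)                          # ANOVA table for the fixed effects
    anova(m, ddf = "Kenward-Roger")   # alternative df approximation via pbkrtest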
Article
Full-text available
The purpose of the paper is to present a contrastive audio-textual approach to teaching English that has been made possible by advances in modern technology. It tackles the problem of how to provide effective English learning to students with different backgrounds and interests, particularly those who are unable, for any reason, to receive quality on-site English education. The method treats language learning as the repetition of chunks of speech perceived simultaneously in three ways: auditory perception of authentic English, visual perception of parallel English and native-language texts, and visual representation of the accompanying image. Materials comprise specifically prepared video files with English and L2 subtitles. Perception of the parallel texts is reinforced by visual representations of events. A special translation technique helps adapt the materials to different groups of learners. The method addresses the varied learning needs of students of different age groups. The proposed method was originally designed for teaching English but can be applied to other languages as well.
Article
Full-text available
The calculation and use of effect sizes—such as d for mean differences and r for correlations—has increased dramatically in second language (L2) research in the last decade. Interpretations of these effects, however, have been rare and, when present, have largely defaulted to Cohen's levels of small (d = .2, r = .1), medium (.5, .3), and large (.8, .5), which were never intended as prescriptions but rather as a general guide. As Cohen himself and many others have argued, effect sizes are best understood when interpreted within a particular discipline or domain. This article seeks to promote more informed and field-specific interpretations of d and r by presenting a description of L2 effects from 346 primary studies and 91 meta-analyses (N > 604,000). Results reveal that Cohen's benchmarks generally underestimate the effects obtained in L2 research. Based on our analysis, we propose a field-specific scale for interpreting effect sizes, and we outline eight key considerations for gauging relative magnitude and practical significance in primary and secondary studies, such as theoretical maturity in the domain, the degree of experimental manipulation, and the presence of publication bias.
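
For reference, the sketch below computes the two indices discussed above in R: Cohen's d for two independent groups and Pearson's r for a correlation. The score vectors are hypothetical, and the resulting values would be interpreted against field-specific benchmarks such as those proposed in the article rather than Cohen's original ones.

    # Cohen's d for two independent groups, using the pooled standard deviation.
    cohens_d <- function(x, y) {
      sd_pooled <- sqrt(((length(x) - 1) * var(x) + (length(y) - 1) * var(y)) /
                          (length(x) + length(y) - 2))
      (mean(x) - mean(y)) / sd_pooled
    }

    d_between <- cohens_d(treatment_scores, control_scores)   # hypothetical vectors
    r_corr    <- cor(reading_time, comprehension_score)       # hypothetical vectors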
Article
Full-text available
Maximum likelihood or restricted maximum likelihood (REML) estimates of the parameters in linear mixed-effects models can be determined using the lmer function in the lme4 package for R. As for most model-fitting functions in R, the model is described in an lmer call by a formula, in this case including both fixed- and random-effects terms. The formula and data together determine a numerical representation of the model from which the profiled deviance or the profiled REML criterion can be evaluated as a function of some of the model parameters. The appropriate criterion is optimized, using one of the constrained optimization functions in R, to provide the parameter estimates. We describe the structure of the model, the steps in evaluating the profiled deviance or REML criterion, and the structure of classes or types that represents such a model. Sufficient detail is included to allow specialization of these structures by users who wish to write functions to fit specialized linear mixed models, such as models incorporating pedigrees or smoothing splines, that are not easily expressible in the formula language used by lmer.
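
A minimal sketch of the lmer formula interface described above, using the sleepstudy data set that ships with lme4 (REML is the default criterion):

    library(lme4)

    # Random intercepts and random slopes for Days within each Subject.
    fm_reml <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
    summary(fm_reml)

    # Refit by maximum likelihood (profiled deviance) rather than REML.
    fm_ml <- update(fm_reml, REML = FALSE)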
Article
Full-text available
Audiovisual material enhanced with captions or interlingual subtitles is a particularly powerful pedagogical tool which can help improve the listening comprehension skills of second-language learners. Captioning facilitates language learning by helping students visualize what they hear, especially if the input is not too far beyond their linguistic ability. Subtitling can also increase language comprehension and leads to additional cognitive benefits, such as greater depth of processing. However, learners often need to be trained to develop active viewing strategies for an efficient use of captioned and subtitled material. Multimedia can offer an even wider range of strategies to learners, who can control access to either captions or subtitles.
Article
Full-text available
This study aims to explore the impact of different captions on second language (L2) learning in a computer-assisted multimedia context. A quasi-experimental design was adopted, and a total of thirty-two eighth graders selected from a junior high school participated in the study. They were systematically assigned to four groups based on their proficiency in English; these groups were shown animations with English narration and one of the following types of caption: no captions (M1), Chinese captions (M2), English captions (M3), and Chinese plus English captions (M4). A multimedia English learning program was conducted; the learning content involved two scientific articles presented on a computer. To track the learning process, data on oral repetition were collected after each sentence or scene was played. A post-test evaluation and a semi-structured interview were conducted immediately after viewing. The results show that the effects of different captions in multimedia L2 learning with respect to vocabulary acquisition and reading comprehension depend on students’ L2 proficiency. With English and Chinese + English captions, learners with low proficiency performed better in learning English relative to those who did not have such captions. Students relied on graphics and animation as an important tool for understanding English sentences.
Article
Full-text available
Foreign language (FL) films with subtitles are becoming increasingly popular, and many European countries use subtitling as a cheaper alternative to dubbing. However, the extent to which people process subtitles under different subtitling conditions remains unclear. In this study, participants watched part of a film under standard (FL soundtrack and native language subtitles), reversed (native language soundtrack and FL subtitles), or intralingual (FL soundtrack and FL subtitles) subtitling conditions while their eye movements were recorded. The results revealed that participants read the subtitles irrespective of the subtitling condition. However, participants exhibited more regular reading of the subtitles when the film soundtrack was in an unknown FL. To investigate the incidental acquisition of FL vocabulary, participants also completed an unexpected auditory vocabulary test. Because the results showed no vocabulary acquisition, the need for more sensitive measures of vocabulary acquisition is discussed. Finally, the reading of the subtitles is discussed in relation to the saliency of subtitles and automatic reading behavior.
Article
Full-text available
Many meta-analysts incorrectly use correlations or standardized mean difference statistics to compute effect sizes on dichotomous data. Odds ratios and their logarithms should almost always be preferred for such data. This article reviews the issues and shows how to use odds ratios in meta-analytic data, both alone and in combination with other effect size estimators. Examples illustrate procedures for estimating the weighted average of such effect sizes and methods for computing variance estimates, confidence intervals, and homogeneity tests. Descriptions of fixed- and random-effects models help determine whether effect sizes are functions of study characteristics, and a random-effects regression model, previously unused for odds ratio data, is described. Although all but the latter of these procedures are already widely known in areas such as medicine and epidemiology, the absence of their use in psychology suggests a need for this description.
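
As a brief illustration of the effect size recommended above, the sketch below computes a log odds ratio, its large-sample variance, and a 95% confidence interval from a hypothetical 2 x 2 table:

    # Hypothetical 2 x 2 table: rows are groups, columns are outcomes.
    tab <- matrix(c(30, 20,
                    15, 35),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(group = c("treatment", "control"),
                                  outcome = c("event", "no_event")))

    log_or  <- log((tab[1, 1] * tab[2, 2]) / (tab[1, 2] * tab[2, 1]))
    var_log <- sum(1 / tab)                                  # 1/a + 1/b + 1/c + 1/d
    ci_or   <- exp(log_or + c(-1.96, 1.96) * sqrt(var_log))  # 95% CI for the odds ratio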
Article
Full-text available
In this article, we discuss the use of eye movement data to assess moment-to-moment comprehension processes. We first review some basic characteristics of eye movements during reading and then present two studies in which eye movements are monitored to confirm that eye movements are sensitive to (a) global text passage difficulty and (b) inconsistencies in text. We demonstrate that processing times increased (and especially that the number of fixations increased) when text is difficult. When there is an inconsistency, readers fixated longer on the region where the inconsistency occurred. In both studies, the probability of making a regressive eye movement increased as well. Finally, we discuss the use of eye movement recording as a research tool to further study moment-to-moment comprehension processes and the possibility of using this tool in more applied school settings. Comprehension can be seen as the product of the development and coordination of various reading competencies, including word recognition, reading fluency, syntactic processing, and knowledge of word meanings. This multifaceted nature of reading makes comprehension skill a sensitive barometer of overall reading development, particularly in older children. However, a low comprehension score does not specify which underlying difficulties contribute to it. Thus, identifying the factors that contribute to impaired comprehension continues to challenge researchers. Our view is that monitoring eye movements during reading can provide valuable information regarding moment-to-moment comprehension processes (see also Rayner, 1997, 1998). The data we present are based on eye movement measures.
Article
Full-text available
Cognitive load theory suggests that effective instructional material facilitates learning by directing cognitive resources toward activities that are relevant to learning rather than toward preliminaries to learning. One example of ineffective instruction occurs if learners unnecessarily are required to mentally integrate disparate sources of mutually referring information such as separate text and diagrams. Such split-source information may generate a heavy cognitive load, because material must be mentally integrated before learning can commence. This article reports findings from six experiments testing the consequences of split-source and integrated information using electrical engineering and biology instructional materials. Experiment 1 was designed to compare conventional instructions with integrated instructions over a period of several months in an industrial training setting. The materials chosen were unintelligible without mental integration. Results favored integrated instructions throughout the 3-month study. Experiment 2 was designed to investigate the possible differences between conventional and integrated instructions in areas in which it was not essential for sources of information to be integrated to be understood. The results suggest that integrated instructions were no better than split-source information in such areas. Experiments 3, 4, and 5 indicate that the introduction of seemingly useful but nonessential explanatory material (e.g., a commentary on a diagram) could have deleterious effects even when presented in integrated format. Experiment 6 found that the need for physical integration was restored if the material was organized in such a manner that individual units could not be understood alone. In light of these results and previous findings, suggestions are made for cognitively guided instructional packages.
Article
Full-text available
This study investigated the effects of captioning during video-based listening activities. Second- and fourth-year learners of Arabic, Chinese, Spanish, and Russian watched three short videos with and without captioning in randomized order. Spanish learners had two additional groups: one watched the videos twice with no captioning, and another watched them twice with captioning. After the second showing of the video, learners took comprehension and vocabulary tests based on the video. Twenty-six learners participated in interviews following the actual experiment. They were asked about their general reactions to the videos (captioned and noncaptioned). Results from t-tests and two-way ANOVAs indicated that captioning was more effective than no captioning. Captioning during the first showing of the videos was more effective for performance on aural vocabulary tests. For Spanish and Russian, captioning first was generally more effective than captioning second, while for Arabic and Chinese there was a trend toward captioning second being more effective. The interview data revealed that learners used captions to increase their attention, improve processing, reinforce previous knowledge, and analyze language. Learners also reported using captions as a crutch.
Article
Comprehension of many types of texts involves constructing meaning from text and pictures. However, research examining how second language (L2) learners process text and pictures and the relationship with comprehension is scarce. Thus, while verbal input is often presented in written and auditory modes simultaneously (i.e., audio of text with simultaneous reading of it), we do not know how the auditory input affects L2 adult learners’ processing of text and pictures and its relation to comprehension. In the current study, L2 adult learners and native (L1) adults read and read while listening to an illustrated story while their eye movements were recorded. Immediately after reading, they completed a comprehension test. Results showed that the presence of auditory input allowed learners to spend more time looking at pictures and supported a better integration of text and pictures. No differences were observed between L2 and L1 readers’ allocation of attention to text and pictures. Both reading conditions led to similar levels of comprehension. Processing time on the text was positively related to comprehension for L2 readers, while it was associated with lower comprehension for L1 readers. Processing time on images was positively related to comprehension only for L1 readers.
Article
This study investigated the effects of four subtitle modes on the listening comprehension of TED (Technology, Entertainment, Design) talks and academic vocabulary learning of intermediate (non-English major) and advanced (English major) English as a foreign language (EFL) learners. A total of 272 Chinese college sophomore students were randomly assigned to one of the four experimental groups: dual subtitles, English subtitles, Chinese subtitles, and no subtitles. The participants viewed four TED talks videos over a period of two weeks and were pretested (vocabulary) and posttested (vocabulary, listening comprehension). The results demonstrated that there were no statistically significant differences among the four subtitle modes for intermediate learners. However, the findings showed significant differences among the four subtitle modes for advanced learners. Specifically, the no subtitle and dual subtitle groups performed significantly better than the English subtitle group on vocabulary learning. The Chinese subtitle group significantly outperformed the no subtitle group on listening comprehension. Taken together, the results for advanced English students indicated that the redundancy effect may exist in vocabulary learning and that adding dual subtitles may not impose high cognitive load. Accordingly, this study offered important implications for learning English through watching videos using different subtitle modes for Chinese students.
Book
The book outlines the major areas of listening research in an accessible manner and provides language teachers with guidelines to design and develop suitable listening tests for their students.
Article
This study explores the differential effects of captions and subtitles on extensive TV viewing comprehension by adolescent beginner foreign language learners, and how their comprehension is affected by factors related to the learner, preteaching of target vocabulary, the lexical coverage of the episodes, and the testing instruments. Four classes of secondary school students took part in an 8-month intervention viewing 24 episodes of a TV series, two classes with captions and two with subtitles. One class in each language condition received explicit instruction on target vocabulary. Comprehension was assessed through multiple-choice and true-false items, which included a combination of textually explicit and inferential items. Results showed a significant advantage of subtitles over captions for content comprehension, and prior vocabulary knowledge emerged as a significant predictor, particularly in the captions condition. Comprehension scores were also mediated by test-related factors: true-false items received more correct responses overall, while scores on textually explicit and inferential items differed according to the language of the on-screen text. Lexical coverage also emerged as a significant predictor of comprehension.
Article
This study launched an investigation into the extent to which textual enhancement in captions can promote learner attention to and subsequent development in second language (L2) grammar. Using eye‐tracking, it also intended to extend research on the relationship between attention and L2 learning. A pretest–posttest experimental design was employed, with 3 treatment sessions. Forty‐eight Korean learners of L2 English were randomly assigned into a captions group (n = 24) and an enhanced captions group (n = 24). For the enhanced captions group, the components of pronominal anaphoric reference were boldfaced in the treatment task input. Learner attention to anaphora antecedents and personal pronouns was assessed with eye‐movement indices, and a written and an oral grammaticality judgment test were used to measure learning gains. Textual enhancement succeeded in directing learner attention to the anaphora antecedents, and led to increased gains in receptive knowledge of pronominal anaphoric reference. However, significant links between attention and L2 development were only observed for the unenhanced captions group. The findings, overall, demonstrate that textually enhanced captioning is a useful pedagogical tool to facilitate development in L2 grammatical knowledge.
Article
Previous studies have indicated the potential for incidental vocabulary learning through viewing television. The assumption has been that the imagery in television helps learners acquire vocabulary because when they hear an unfamiliar word, the on-screen images provide semantic support. However, the extent to which imagery in authentic television supports learners in this way is unclear. This study examines 90 target words occurring in single seasons of television, and the degree to which their aural occurrence matched the presentation of a potentially supporting image. Results indicate differences in the way imagery supports potential vocabulary learning in documentary television compared with narrative television, and that this supporting imagery occurred concurrently with the aural form more often in documentary television. Research and pedagogical implications are discussed in detail.
Article
This chapter updates the dual coding theory (DCT) of the memory systems of bilingual (and multilingual) individuals. DCT is a particular variant of multiple storage views of memory that contrast with common coding (single store) views.
Article
This study examines how three captioning types (i.e., on-screen text in the same language as the video) can assist L2 learners in the incidental acquisition of target vocabulary words and in the comprehension of L2 video. A sample of 133 Flemish undergraduate students watched three French clips twice. The control group (n = 32) watched the clips without captioning; the second group (n = 30) watched fully captioned clips; the third group (n = 34) watched keyword captioned clips; and the fourth group (n = 37) watched fully captioned clips with highlighted keywords. Prior to the learning session, participants completed a vocabulary size test. During the learning session, they completed three comprehension tests; four vocabulary tests measuring (a) form recognition, (b) meaning recognition, (c) meaning recall, and (d) clip association, which assessed whether participants associated words with the corresponding clip; and a final questionnaire. Our findings reveal that the captioning groups scored equally well on form recognition and clip association and significantly outperformed the control group. Only the keyword captioning and full captioning with highlighted keywords groups outperformed the control group on meaning recognition. Captioning affected neither comprehension nor meaning recall. Participants’ vocabulary size correlated significantly with their comprehension scores as well as with their vocabulary test scores.
Article
This study reports on a meta-analysis of the effectiveness of captioned video (i.e., L2 video with L2 subtitles) for listening comprehension and vocabulary learning in the context of second language acquisition. The random-effects meta-analysis provides a quantitative measure of the overall effect of captions on listening comprehension and vocabulary acquisition, as well as an investigation into the relationship between captioning effectiveness and two potential moderators: test type and proficiency level. We conducted a systematic review and calculated effect sizes for 18 retrieved studies. Separate meta-analyses were performed for listening comprehension (including data from 15 studies) and for vocabulary learning (including data from 10 studies). The findings indicate a large effect of captions on listening comprehension as well as on vocabulary acquisition. Test type was found to moderate the effect sizes of listening comprehension. Proficiency level did not moderate the effect sizes of listening comprehension and vocabulary learning. The article concludes with a contextualized discussion of the results and an overview of the limitations of the present meta-analysis as well as a number of future research perspectives.
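
A minimal sketch of this kind of random-effects meta-analysis with a moderator in R; the metafor package is not named in the study and is assumed here, as is a data frame dat with per-study effect sizes (yi), sampling variances (vi), and a test-type variable.

    library(metafor)

    overall <- rma(yi = yi, vi = vi, data = dat, method = "REML")   # pooled effect
    by_test <- rma(yi = yi, vi = vi, mods = ~ test_type,            # moderator analysis
                   data = dat, method = "REML")
    summary(by_test)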
Article
As in any field of scientific inquiry, advancements in the field of second language acquisition (SLA) rely in part on the interpretation and generalizability of study findings using quantitative data analysis and inferential statistics. While statistical techniques such as ANOVA and t-tests are widely used in second language research, this review article provides a review of a class of newer statistical models that have not yet been widely adopted in the field, but have garnered interest in other fields of language research. The class of statistical models called mixed-effects models are introduced, and the potential benefits of these models for the second language researcher are discussed. A simple example of mixed-effects data analysis using the statistical software package R (R Development Core Team, 2011) is provided as an introduction to the use of these statistical techniques, and to exemplify how such analyses can be reported in research articles. It is concluded that mixed-effects models provide the second language researcher with a powerful tool for the analysis of a variety of types of second language acquisition data.
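
To complement the overview above, the sketch below fits a logistic mixed-effects model to item-level accuracy data with crossed random effects for participants and items, a setup common in second language research; the data frame d is hypothetical.

    library(lme4)

    m_acc <- glmer(correct ~ condition + (1 | participant) + (1 | item),
                   family = binomial, data = d)
    summary(m_acc)   # fixed effect of condition on the log-odds of a correct response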
Article
The abstract for this document is available on CSA Illumina.To view the Abstract, click the Abstract button above the document title.
Article
The scripts of 318 movies were analyzed in this study to determine the vocabulary size necessary to understand 95% and 98% of the words in movies. The movies consisted of 2,841,887 running words and had a total running time of 601 hours and 33 minutes. The movies were classified as either American or British, and then put into the following genres: action, animation, comedy, suspense/crime, drama, horror, romance, science fiction, war, western, and classic. The results showed that knowledge of the most frequent 3,000 word families plus proper nouns and marginal words provided 95.76% coverage, and knowledge of the most frequent 6,000 word families plus proper nouns and marginal words provided 98.15% coverage of movies. Both American and British movies reached 95% coverage at the 3,000 word level. However, American movies reached 98% coverage at the 6,000 word level while British movies reached 98% coverage at the 7,000 word level. The vocabulary size necessary to reach 95% coverage of the different genres ranged from 3,000 to 4,000 word families plus proper nouns and marginal words, and 5,000 to 10,000 word families plus proper nouns and marginal words to reach 98% coverage. The implications for teaching and learning with movies are discussed in detail.
Article
When foreign movies are subtitled in the local language, reading subtitles is more or less obligatory. Our previous studies have shown that knowledge of the foreign language or switching off the sound track does not affect the total time spent in the subtitled area. Long-standing familiarity with subtitled movies and processing efficiency have been suggested as explanations. Their effects were tested by comparing American and Dutch-speaking subjects who differ in terms of subtitling familiarity. In Experiment 1, American subjects watched an American movie with English subtitles. Despite their lack of familiarity with subtitles, they spent considerable time in the subtitled area. Accordingly, subtitle reading cannot be due to habit formation from long-term experience. In Experiment 2, a movie in Dutch with Dutch subtitles was shown to Dutch-speaking subjects. They also looked extensively at the subtitles, suggesting that reading subtitles is preferred because of efficiency in following and understanding the movie. However, the same findings can also be explained by the more dominant processing of the visual modality. The proportion of time spent reading subtitles is consistently larger with two-line subtitles than with one-line subtitles. Two explanations are provided for the differences in watching one- and two-line subtitles: (a) the length expectation effect on switching attention between picture and text and (b) the presence of lateral interference within two lines of text.
Article
In this study, the scripts of 288 television episodes were analyzed to determine the extent to which vocabulary reoccurs in related and unrelated television programs, and the potential for incidental vocabulary learning through watching one season (approximately 24 episodes) of television programs. The scripts consisted of 1,330,268 running words and had a total running time of 203 hours and 49 minutes with a mean running time of 42 minutes. The vocabulary from a single season of six individual television programs (142 episodes) was compared with six sets of random television programs (146 episodes). The results indicated that, when there are an equivalent number of running words, related television programs are likely to contain fewer word families than unrelated programs. The findings also indicated that word families from the 4,000–14,000 levels were more likely to reoccur in a complete season of a television program than in random television programs. The percentage of low-frequency word families encountered 10 or more times was higher, and the percentage of word families encountered once was fewer in all six programs than in the random television programs.
Article
The purpose of this study was to examine the effects of using Spanish captions, English captions, or no captions with a Spanish-language soundtrack on intermediate university-level Spanish as a Foreign Language students' comprehension of DVD passage material. A total of 169 intermediate (fourth-semester) students participated as intact groups in the study. The passage material consisted of a 7-minute DVD episode about preparation for the Apollo 13 space-exploration mission. The students viewed only one of three passage treatment conditions: Spanish captions, English captions, or no captions. The English-language dependent measures consisted of a written summary generated by the students and a 10-item multiple-choice test. The statistically significant results revealed that the English captions group performed at a substantially higher level than the Spanish captions group, which in turn performed at a considerably higher level than the no captions group on both dependent measures. The pedagogical value of using multilingual soundtracks and multilingual captions in various ways to enhance second language reading and listening comprehension is discussed.
Article
This article reports a reliability study of two versions of the Vocabulary Levels Test at the 5,000 word level. This study was motivated by a finding from an ongoing longitudinal study of vocabulary acquisition that Version A and Version B of the Vocabulary Levels Test at the 5,000 word level were not parallel. In order to investigate this issue, Versions A and B were combined to create a single instrument. This was administered at one time to discover whether the score differences found in the longitudinal study were present once the variable of time was removed. The data were analysed using correlation, and in order to discover if there was a significant difference between the means of Version A and Version B, a t-test was used. Following that, a further examination of item facility values was conducted. The data analysis showed that Version A and Version B at the 5,000 word level were highly correlated and highly reliable. However, the item analysis shows that Version B contains a number of more difficult items. While versions of the Vocabulary Levels Test at the 2,000, 3,000, and Academic levels may be treated as parallel for longitudinal studies, this does not hold at the 5,000 word level. We suggest changes that need to be made to the test before it is used in future longitudinal vocabulary growth studies.
Book
This work presents a systematic analysis of the psychological phenomena associated with the concept of mental representations - also referred to as cognitive or internal representations. A major restatement of a theory the author of this book first developed in his 1971 book (Imagery and Verbal Processes), this book covers phenomena from the earlier period that remain relevant today but emphasizes cognitive problems and paradigms that have since emerged more fully. It proposes that performance in memory and other cognitive tasks is mediated not only by linguistic processes but also by a distinct nonverbal imagery model of thought as well. It discusses the philosophy of science associated with the dual coding approach, emphasizing the advantages of empiricism in the study of cognitive phenomena and shows that the fundamentals of the theory have stood up well to empirical challenges over the years.
Article
Simultaneous inference is a common problem in many areas of application. If multiple null hypotheses are tested simultaneously, the probability of rejecting erroneously at least one of them increases beyond the pre-specified significance level. Simultaneous inference procedures have to be used which adjust for multiplicity and thus control the overall type I error rate. In this paper we describe simultaneous inference procedures in general parametric models, where the experimental questions are specified through a linear combination of elemental model parameters. The framework described here is quite general and extends the canonical theory of multiple comparison procedures in ANOVA models to linear regression problems, generalized linear models, linear mixed effects models, the Cox model, robust linear models, etc. Several examples using a variety of different statistical models illustrate the breadth of the results. For the analyses we use the R add-on package multcomp, which provides a convenient interface to the general approach adopted here.
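
A minimal sketch of the interface described above, assuming a previously fitted model m with a four-level condition factor (both hypothetical):

    library(multcomp)

    comps <- glht(m, linfct = mcp(condition = "Tukey"))   # all pairwise comparisons
    summary(comps)   # p values adjusted to control the familywise error rate
    confint(comps)   # simultaneous confidence intervals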
Range: A program for the analysis of vocabulary in texts (Version 3) [Computer software]
  • I S P Nation
  • A Heatley
emmeans: Estimated marginal means, aka least-squares means
  • R Lenth
LMERConvenienceFunctions: Model selection and post-hoc analysis for (G)LMER models
  • A Tremblay
  • J Ransijn
Subtitle guidelines (Version1
  • BBC
sjPlot: Data visualization for statistics in social science
  • D Lüdecke
SrtEdit (Version 6.3) [Computer software]
  • Portablesoft
L1/L2 subtitled TV series and EFL learning: A study on vocabulary acquisition and content comprehension at different proficiency levels (Unpublished doctoral dissertation)
  • Gesa Vidal