Mean vocabulary pretest and posttest scores by subtitle group (max. score = 1). Lines in the boxes represent median scores; boxes range from the 25th to the 75th percentile; vertical lines range from the minimum to the maximum score; the symbol ° represents outliers. Captions = L2 subtitles group; L1 = first-language subtitles group; Bilingual = bilingual subtitles group; No = no-subtitles group.

Source publication
Article
Full-text available
This study examined the effectiveness of bilingual subtitles relative to captions, subtitles, and no subtitles for incidental vocabulary learning. Learners’ processing of novel words in the subtitles and its relationship to learning gains were also explored. While their eye movements were recorded, 112 intermediate to advanced Chinese learners of E...

Contexts in source publication

Context 1
... statistics for the pretest and posttest scores by item are provided in Appendix S7 in the online Supporting Information. Figure 2 shows that the posttest scores were in general higher than the pretest scores. In response to Research Question 1, we first checked whether all the groups had significantly improved their vocabulary knowledge after the treatment. ...
Context 2
... As Figure 2 shows, the captions group obtained the highest mean posttest scores in the form recognition test, although the bilingual subtitles group achieved the highest mean scores for meaning recall and meaning recognition. In order to examine the relative effects of bilingual subtitles compared to other subtitling types, we constructed a series of logistic mixed-effects models. ...
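The analysis mentioned in the excerpt above (logistic mixed-effects models predicting item-level accuracy from subtitle group) can be sketched as follows. This is a minimal illustration on simulated data, not the study's actual data or model specification; the variable names (subject, item, group, correct) are hypothetical, and statsmodels' variational-Bayes mixed GLM stands in for what would typically be fitted with lme4::glmer in R.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)

# Hypothetical data: binary posttest accuracy for 40 learners x 10 items,
# with a toy two-group coding (not the study's actual design or data).
n_subj, n_item = 40, 10
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_item),
    "item": np.tile(np.arange(n_item), n_subj),
})
df["group"] = df["subject"] % 2  # 0 = captions, 1 = bilingual (toy coding)

# Simulate accuracy with a small group effect plus per-subject noise.
subj_re = rng.normal(0, 0.5, n_subj)
logit = -0.2 + 0.8 * df["group"] + subj_re[df["subject"]]
df["correct"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic model with crossed random intercepts for subjects and items,
# fitted by variational Bayes.
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ group",
    {"subject": "0 + C(subject)", "item": "0 + C(item)"},
    df,
)
result = model.fit_vb()
print(result.fe_mean)  # posterior means of intercept and group effect
```

Crossed random intercepts for participants and items are the standard way to account for the fact that the same learners answer the same target words, so accuracy observations are not independent.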

Citations

... Captioning, therefore, serves a supporting role by offering people multiple representations of the same information (Teng, 2021). Fortunately, captioned videos are openly available today, with the ... (2) the impacts of incorporating advance organizers to strengthen incidental vocabulary learning from captioned videos (Teng, 2019a, 2022c); (3) frameworks or models related to captioned video adoption in an FL context (Teng, 2021; Vanderplank, 2016); (4) the roles of word-related factors, such as frequency, in incidental vocabulary learning from captioned videos (Majuddin et al., 2021; Teng, 2019b); (5) learner characteristics that may influence incidental vocabulary learning from such videos (Suárez & Gesa, 2019; Teng, 2022a, 2022b; Teng & Mizumoto, 2023); and (6) these videos' effects, including in terms of bilingual subtitling, from an eye-tracking perspective (Montero Perez et al., 2015; Wang & Pellicer-Sánchez, 2022). This research agenda can familiarize scholars and classroom practitioners with how best to apply captioned videos for incidental vocabulary learning. ...
Article
Full-text available
In response to the recent surge of interest in incidental vocabulary learning, this article synthesizes ideas about such learning in practice. I specifically derive seven critical issues from studies on the topic. I also examine vocabulary learning through incidental means based on various input sources while considering frequency, context, motivation, and strategies and tasks to foster deeper mental processing and better retention. Findings can inform pedagogically sound guidelines for effective vocabulary instruction. Actionable suggestions are provided to enhance incidental vocabulary learning, given an understanding of relevant issues.
... One of the primary aims of eye-tracking in the field of language learning is to research comprehension and processing of written text (Keating, 2013; van Gompel et al., 2007). However, a range of studies on L2 listening tests (Kho et al., 2022), L2 listening processing with subtitles (Wang & Pellicer-Sánchez, 2022), L2 word processing (Fernández & Jegerski, 2022; García-Castro, 2018; Huang et al., 2022; Yi & DeKeyser, 2022), and L2 vocabulary learning (Wang & Pellicer-Sánchez, 2022), among many others, have been conducted and have shed light on L2 teaching and learning. Unfortunately, studies have not yet explored L2 teaching and learning processes within the population in Costa Rica. ...
... Similarly, Kho et al. (2022) showed that gaze behavior in university students in Asia is related to participants' performance in listening tests. Eye movements have also been found to predict vocabulary gains of Chinese learners of English when watching a documentary with subtitles (Wang & Pellicer-Sánchez, 2022). Eye-tracking has been used to research various L2 learning populations; however, to the best of the researcher's knowledge, it has not been widely explored within the Costa Rican population. ...
Article
Full-text available
Research using eye-tracking technology has greatly increased in the last decade (Keating, 2013). Hence, studies on L2 teaching and learning have started to use this technology to improve research in the area. However, L2 eye-tracking research has not been explored in the Costa Rican context. This article aims to describe what eye-tracking is, what eye-tracking paradigms are, studies using eye-tracking in L2 teaching and learning, and research ideas for its implementation in Costa Rica. The methodology employed was an integrative review approach (Cooper, 1988) to review eye-tracking research and to discuss its application in Costa Rica. It was found that there are no studies regarding L2 teaching and learning in Costa Rica with the use of eye-tracking. Thus, research ideas to conduct eye-tracking research in L2 teaching and learning are discussed. It is concluded that the use of eye-tracking brings an opportunity to conduct multidisciplinary research to further advance the knowledge of L2 teaching and learning processes in Costa Rica.
... However, this approach requires advanced preprocessing. Wang and Pellicer-Sánchez [45] investigated the effectiveness of bilingual subtitles compared to captions, subtitles, and no subtitles in an eye-tracking study. They found that while bilingual subtitles lead to higher meaning recognition, they can also be distracting, as viewers tend to spend more time reading the translations than the new words in the target language. ...
Preprint
Full-text available
Captions provide language learners with a scaffold for comprehension and vocabulary acquisition. Past work has proposed several enhancements such as keyword highlights for increased learning gains. However, little is known about learners' experience with enhanced captions, although this is critical for adoption in everyday life. We conducted a survey and focus group to elicit learner preferences and requirements and implemented a processing pipeline for enhanced captions with keyword highlights, time-synchronized keyword highlights, and keyword captions. A subsequent online study (n = 49) showed that time-synchronized keyword highlights were the preferred design for learning but were perceived as too distracting to replace standard captions in everyday viewing scenarios. We conclude that keyword highlights and time-synchronization are suitable for integrating learning into an entertaining everyday-life activity, but the design should be optimized to provide a more seamless experience.
... In the context of vocabulary learning from viewing (i.e., watching audio-visual materials, such as television shows and movies), studies have shown that adult learners process the animation and on-screen text regardless of the language of the text, with similar processing patterns for first language (L1) subtitles (i.e., on-screen text in viewers' L1) and captions (i.e., on-screen text in the same language as the soundtrack; e.g., Bisson et al., 2014). Empirical evidence has also suggested that early processing of unknown lexical items facilitates learners' knowledge of word form, whereas the predictive role of late measures is still unclear (e.g., Montero Perez et al., 2015; Wang & Pellicer-Sánchez, 2022b). ...
... We present two worked examples using data collected by Wang (2022) on learning from subtitled viewing. Wang's (2022) research aimed to investigate the effects of different subtitling types (i.e., captions, L1 subtitles, and bilingual subtitles) on L2 learners' comprehension (Wang & Pellicer-Sánchez, 2022a) and incidental vocabulary learning (Wang & Pellicer-Sánchez, 2022b), and explored learners' engagement with unknown words during viewing using eye-tracking and stimulated recalls. In Wang's (2022) research, a number of unknown words from the video were selected as target words (TWs), and participants' prior knowledge of those words was tested by means of pre-tests. ...
... To analyse the eye-tracking data, dynamic interest areas covering the presentation time of each TW should first be created (for a more detailed explanation, see Wang & Pellicer-Sánchez, 2022b). Then, researchers should select and export eye-tracking measures for all the unknown TWs. ...
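The procedure described above (dynamic interest areas that are active only while a target word is on screen, then per-word measures exported for analysis) can be illustrated with a small sketch. The fixation records, coordinates, and the total-reading-time measure below are simplified, hypothetical stand-ins; real eye-tracker exports and interest-area tools differ by vendor.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation, with screen coordinates,
# a timestamp, and a duration (all in ms or pixels).
fixations = pd.DataFrame({
    "t_ms": [100, 350, 600, 900, 1200],
    "x":    [120, 400, 410, 415, 700],
    "y":    [500, 520, 525, 520, 510],
    "dur":  [180, 220, 240, 200, 150],
})

# Hypothetical dynamic interest area for one target word: a bounding box
# that is "active" only while the subtitle containing the word is on screen.
tw_area = {"t_on": 300, "t_off": 1000, "x0": 390, "x1": 430, "y0": 510, "y1": 530}

def total_reading_time(fix, area):
    """Sum the durations of fixations that land inside the interest area
    while it is displayed (a common aggregate eye-tracking measure)."""
    inside = (
        fix["t_ms"].between(area["t_on"], area["t_off"])
        & fix["x"].between(area["x0"], area["x1"])
        & fix["y"].between(area["y0"], area["y1"])
    )
    return int(fix.loc[inside, "dur"].sum())

print(total_reading_time(fixations, tw_area))  # → 660 (220 + 240 + 200)
```

Pairing a spatial bounding box with an on/off time window is what makes the interest area "dynamic": a fixation counts toward a word only while that word's subtitle is actually displayed.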
... One way of doing this is to provide both types of captions together. A study by Wang and Pellicer-Sánchez (2022) confirmed that this leads to greater vocabulary gains than providing no on-screen text. Moreover, it brought about better results in their study than watching only with L2 captions, at least at the level of acquiring the meaning of words. ...
... Having no control condition and only one comparison condition are additional limitations. For example, it would be interesting to compare the sequential use and the simultaneous use of L1 captions and L2 captions that was tried by Wang and Pellicer-Sánchez (2022). It may also be worth exploring other viewing sequences, including ones mentioned in the responses to the questionnaire (for instance, viewing first without captions). ...
... Thus, since the 1990s, researchers have been concerned with how viewing captioned and subtitled videos might address the need to acquire a substantial amount of L2 vocabulary [4][5][6][7]. Over time, many studies have shown that viewing captioned and subtitled videos not only enhances learners' comprehension but also facilitates language acquisition [8][9][10], and, more specifically, vocabulary acquisition [9,[11][12][13][14]. ...
... Researchers' interests in this area have led to investigations targeting numerous variables that may facilitate learning. For example, some studies compared the effect of different types of captioning and subtitling on vocabulary acquisition [14][15][16][17][18], intentional and incidental learning conditions [16,19,20], input medium [7,9,13,21], and vocabulary knowledge type [5,7,12,22], among others. This body of previous empirical studies has greatly contributed to the current understanding that viewing captioned and subtitled videos affects vocabulary acquisition. ...
... Captioning and subtitling have been shown to be effective ways of improving vocabulary acquisition for L2 learners [14,22]. In video viewing, captioning is commonly used as a text service to aid hearing-impaired viewers. ...
Article
Full-text available
As access to video-viewing technology has increased, so has researchers’ interest in understanding how the viewing of captioned and subtitled videos can lead to effective vocabulary learning outcomes. Previously, there has been one meta-analysis on the effects of this type of video-viewing on vocabulary acquisition. However, the variables investigated and types of vocabulary knowledge analyzed were limited. To address these issues, we conducted a mixed review that combined a scoping review and meta-analysis. We identified 139 studies in major databases, of which 34 aligned with our inclusion criteria. Results from the scoping review found that researchers have assessed productive knowledge more than receptive knowledge, and knowledge of form and meaning more than knowledge of use. Participants were given TV series to view more than any other media type. Results from the meta-analysis found that viewing any type of captioned or subtitled videos had a positive effect on vocabulary acquisition. Among all the captioned and subtitled video types, viewing videos with intralingual captions had the largest effect on vocabulary learning outcomes. Furthermore, the viewing of animations had the largest effect on vocabulary learning outcomes compared with all the other types of video viewing investigated. No statistically significant difference between intentional or incidental learning conditions was found, indicating that both conditions are suitable for developing vocabulary learning through video viewing. Additional findings and implications for teaching and research are discussed.
... A subtitle is interlingual on-screen text that provides an L1 translation of the L2 soundtrack, while a caption is intralingual on-screen text that provides a verbatim L2 transcription of the L2 soundtrack (Danan, 2004; Winke et al., 2010; Hsu et al., 2013). A dual subtitle combines the L1 translation and the L2 verbatim transcription simultaneously (Lwo and Lin, 2012; Hao et al., 2021; Wang and Pellicer-Sánchez, 2022). Full caption, another term for caption, is used when captions are discussed in the scope of caption modes, to distinguish them from keyword captions in particular. ...
... The first camp held that captions were superior to subtitles (Peters et al., 2016; Baranowska, 2020; Wang and Pellicer-Sánchez, 2022); a second camp found the type of on-screen text irrelevant (Lwo and Lin, 2012; Frumeselu, 2019; Bisson et al., 2014; Muñoz et al., 2021; Vulchanova et al., 2015; Birulés-Muntané and Soto-Faraco, 2016); and a third held that subtitles were better (Hao et al., 2021). Peters et al. (2016) carried out two experiments, respectively, on intermediate and low-proficiency English-as-a-foreign-language (EFL) students to investigate the differential effects of subtitles and captions. The two experiments arrived at almost the same conclusion: captions showed a greater influence on word form than subtitles. ...
... The findings of the second group showed that L2 vocabulary learning had little to do with the type of on-screen text. Lwo and Lin (2012) were among the few researchers who introduced dual subtitles into their study, yet their result was quite different from that of Wang and Pellicer-Sánchez (2022). Instead of observing a positive impact of captions or dual subtitles, they found that neither the existence nor the type of on-screen text exerted any influence on junior high school students' vocabulary recognition and use, which they ascribed to the excessive visual and auditory support in the teaching material that dwarfed the effects of the on-screen text. ...
Article
Full-text available
Audiovisual input has received increasing attention from the Second Language Acquisition (SLA) and the Computer-Assisted Language Learning (CALL) domains during the past few decades due to its vividness, authenticity, and easy accessibility. Videos with on-screen texts, as a widespread form of audiovisual input in second language (L2) teaching and learning, influence L2 learners’ performance in various aspects, including their vocabulary learning. The wide application and profound influence of this kind of input call for a systematic review of this important domain of research. Accordingly, this paper reviews the empirical studies on the effects of on-screen texts on L2 vocabulary learning. Specifically, it seeks to evaluate the role of different types of on-screen texts (i.e., subtitles, captions, and dual subtitles) and various modes of captions (i.e., full captions, keyword captions, glossed captions, annotated captions, and enhanced captions) in L2 vocabulary development. It also discusses other factors that concur with on-screen texts and influence L2 vocabulary gains from audiovisual input, such as learners’ vocabulary size, L2 proficiency, frequency of occurrence, number of viewings, instructional strategy, and test time. Finally, some suggestions are provided for future research.
... Wang and Pellicer-Sánchez also found that bilingual subtitles were superior to other forms of subtitling for the acquisition of meaning, whereas they were less effective than captions for form recognition [11]. ...
... During the experiment, we asked each participant to watch a lecture by Professor Paul Bloom titled "Introduction to Psychology" [11]. It was dated January 22, 2018, and is available on Yale University's Open Yale Courses via Asuka Academy, a platform that offers dual subtitles in both Japanese and English. ...
Article
This paper reports on an eye-tracking study investigating the processing and mnemonic retention of reverse subtitles (foreign-language subtitles presented alongside native-language audio) in learners of Italian as a Foreign Language (IFL). Twenty-six English native speakers with a CEFR B2+ Italian level watched an English clip with Italian subtitles in two translation conditions, formal similarity (literal transfer) and formal discrepancy (non-literal transfer). Immediately after watching, they answered recognition and recall questions. This study examines memory, attention allocation, and the concept of noticing, which was investigated through triangulation of eye tracking, verbatim recognition, and explicit reports. Data analysis methods include generalised mixed-effects modelling. Results revealed that reverse subtitles have acquisitional potential for advanced IFL learners, that noticing can be probed experimentally, and that formal (dis)similarity appears to have some psychological reality in the mind of the learner, being able to affect both recognition and recall. Evidence of novel word learning as well as deepening of existing knowledge emerged from the analyses, supporting the view that reverse subtitles could be more fruitfully exploited in FLL contexts. The paper presents details of the data analyses, discusses them in relation to Second Language Acquisition (SLA) and psycholinguistic concepts, and draws some recommendations based on the findings.
Article
This article explores bilingual subtitling, a relatively under-researched mode of audiovisual translation, and its role in the ever-evolving landscape of global media streaming. Originally used for cinema productions in officially bilingual countries and international film festivals, bilingual subtitling has now resurfaced as a response to the growing affordances of streaming media. This article investigates the proliferation of bilingual subtitling tools and practices in different contexts, from PC-based tools and Chrome extensions that add bilingual subtitle features to streaming platforms (Netflix, YouTube) to amateur (optionally bilingual) subtitling streaming services (Viki Rakuten), video sharing websites (Bilibili), and online channels with open bilingual subtitles embedded in their videos (Easy Languages). Bilingual subtitling is further promoted as a pedagogical tool for foreign-language learning that matches the expectations of contemporary learners, especially ‘digital natives’ who have grown up with new online modalities. The conventional ways in which audiences used to engage with audiovisual content have, arguably, been superseded as streaming platforms that offer an abundance of options in terms of language and content are gradually reshaping viewing patterns. Shifting away from long-established patterns of passive TV consumption, this article also sets out to present online collaborations and initiatives that seek to incorporate bilingual subtitles in language learning while promoting the active participation of the audience within the emerging media streaming landscape.