Chapter

Psycholinguistics and audiovisual translation: Theoretical and methodological challenges


Abstract

The exponential growth of Audiovisual Translation (AVT) in the last three decades has consolidated its place as an area of study within Translation Studies (TS). However, AVT is still a young domain currently exploring a number of different lines of inquiry without a specific methodological and theoretical framework. This volume discusses the advantages and drawbacks of ten approaches to AVT and highlights the potential avenues opened up by new methods. Our aim is to jumpstart the discussion on the (in)adequacy of the methodologies imported from other disciplines and the need (or not) for a conceptual apparatus and framework of analysis specific to AVT. This collective work relates to recent edited volumes that seek to take stock of research in AVT, but it distinguishes itself from those publications by promoting links in what is now a very fragmented field. Originally published as a special issue of Target 28:2 (2016).


References
Article
Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.
Conference Paper
In educational design literature, it is often taken as fact that subtitles increase cognitive load (CL). This paper investigates this assumption experimentally by comparing various measures of CL when students watch a recorded academic lecture with or without subtitles. Since the measurement of cognitive load is by no means a simple matter, we first provide an overview of the different measurement techniques based on causality and objectivity. We measure CL by means of eye tracking (pupil dilation), electroencephalography (EEG), self-reported ratings of mental effort, frustration, comprehension effort and engagement, as well as performance measures (comprehension test). Our findings seem to indicate that the subtitled condition in fact created lower CL in terms of percentage change in pupil diameter (PCPD) for the stimulus, approaching significance. In the subtitled condition PCPD also correlates significantly with participants' self-reported comprehension effort levels (their perception of how easy or difficult it was to understand the lecture). The EEG data, in turn, shows a significantly higher level of frustration for the unsubtitled condition. Negative emotional states could be caused by situations of higher CL (or cognitive overload) leading to learner frustration and dissatisfaction with learning activities and their own performance [16]. It could therefore be reasoned that participants had a higher CL in the absence of subtitles. The self-reported frustration levels correlate with the frustration measured by the EEG as well as the self-reported engagement levels for the subtitled group. We also found a significant correlation between the self-reported engagement levels and both short- and long-term comprehension for the unsubtitled condition but not for the subtitled condition. There was no significant difference in either short-term or long-term performance measures between the two groups, which seems to suggest that subtitles, at the very least, do not result in cognitive overload.
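To make the pupil-based measure mentioned above concrete, the following is a minimal sketch (not the authors' analysis pipeline) of how percentage change in pupil diameter (PCPD) might be computed from baseline and stimulus samples and related to self-reported effort; all data, names and values in it are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def pcpd(baseline_samples, stimulus_samples):
    """Percentage change in pupil diameter (PCPD) relative to a baseline.

    Both arguments are 1-D arrays of pupil-diameter samples (e.g., in mm)
    for one participant; the measure is the mean stimulus diameter expressed
    as a percentage change from the mean baseline diameter.
    """
    baseline = np.mean(baseline_samples)
    return 100.0 * (np.mean(stimulus_samples) - baseline) / baseline

# Hypothetical data for five participants (pupil samples in mm).
rng = np.random.default_rng(0)
baselines = [rng.normal(3.0, 0.05, 500) for _ in range(5)]
stimuli = [rng.normal(3.0 + d, 0.05, 5000) for d in (0.10, 0.22, 0.15, 0.30, 0.08)]
effort_ratings = [2, 4, 3, 5, 2]  # hypothetical self-reported comprehension effort (1-5)

pcpd_scores = [pcpd(b, s) for b, s in zip(baselines, stimuli)]
r, p = pearsonr(pcpd_scores, effort_ratings)
print(f"PCPD per participant: {[round(x, 1) for x in pcpd_scores]}")
print(f"Correlation with self-reported effort: r = {r:.2f}, p = {p:.3f}")
```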
Article
Audiovisual material enhanced with captions or interlingual subtitles is a particularly powerful pedagogical tool which can help improve the listening comprehension skills of second-language learners. Captioning facilitates language learning by helping students visualize what they hear, especially if the input is not too far beyond their linguistic ability. Subtitling can also increase language comprehension and leads to additional cognitive benefits, such as greater depth of processing. However, learners often need to be trained to develop active viewing strategies for an efficient use of captioned and subtitled material. Multimedia can offer an even wider range of strategies to learners, who can control access to either captions or subtitles.
Article
Language teachers in the twenty-first century cannot ignore the possible benefits of using multimodal texts in the classroom. One such multimodal source that has been used extensively is subtitled videos. Against the background of conflicting theories in the fields of educational psychology and psycholinguistics as well as language acquisition where multimodal texts are concerned, this article presents an experiment aimed at determining the impact of competition between different sources of information on comprehension and attention allocation. The material that is investigated is a recorded and subtitled academic lecture in Economics with PowerPoint slides edited in, as an example of multisource communication. The article in particular engages with the issue of language as it pertains to the use of English as medium of instruction for English Second Language (ESL) students in South Africa. Essentially, the article seeks to shed light on the well documented positive effects of subtitles that are explained by the information delivery hypothesis and Dual Coding Theory, and the equally well documented negative impact explained by the redundancy effect in Cognitive Load Theory. Some evidence was found in the study that cognitive resources are assigned to more stable information sources like slides and non-verbal visual contextual information when the presentation speed of subtitles increases. This means that when the presentation speed of subtitles increases, learners focus on stable textual information (like slides) and on nonverbal information (like the face of the lecturer). Using the correct presentation speed of subtitles in multisource information in an educational setting is imperative for the activation of the potential benefits of multisource communication (that includes subtitles) for learning. The findings of the study stand to benefit all fields of multimedia educational design, but also have direct relevance to the use of technological support such as subtitles in the classroom.
Article
This article presents an experimental study to investigate whether subtitle reading has a positive impact on academic performance. In the absence of reliable indexes of reading behavior in dynamic texts, the article first formulates and validates an index to measure the reading of text, such as subtitles on film. Eye-tracking measures (fixations and saccades) are expressed as functions of the number of standard words and word length and provide a reliable index of reading behavior of subtitles over extended audiovisual texts. By providing a robust index of reading over dynamic texts, this article lays the foundation for future studies combining behavioral measures and performance measures in fields such as media psychology, educational psychology, multimedia design, and audiovisual translation. The article then utilizes this index to correlate the degree to which subtitles are read and the performance of students who were exposed to the subtitles in a comprehension test. It is found that a significant positive correlation is obtained between comprehension and subtitle reading for the sample, providing some evidence in favor of using subtitles in reading instruction and language learning. The study, which was conducted in the context of English subtitles on academic lectures delivered in English, further seems to indicate that the number of words and the number of lines do not play as big a role in the processing of subtitles as previously thought but that attention distribution across different redundant sources of information results in the partial processing of subtitles.
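The index described above can be illustrated with a small sketch. The snippet below is a simplified stand-in rather than the published index: it normalises fixation counts on each subtitle by its length in standard words (taken here, by convention, as five characters) and correlates a per-participant mean with comprehension scores; every number and name in it is invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

STANDARD_WORD_LEN = 5  # conventional "standard word" of five characters

def subtitle_reading_index(n_fixations, subtitle_text):
    """Fixations on a subtitle per standard word of its text.

    A simplified stand-in for a reading index over dynamic text: the more
    fixations per standard word, the more thoroughly the subtitle was read.
    """
    n_standard_words = max(len(subtitle_text.replace(" ", "")) / STANDARD_WORD_LEN, 1)
    return n_fixations / n_standard_words

# Hypothetical viewing data: per participant, fixation counts on three subtitles.
subtitles = [
    "Supply and demand set the market price.",
    "Inflation erodes purchasing power over time.",
    "Interest rates influence investment decisions.",
]
fixation_counts = {
    "p1": [6, 7, 5],
    "p2": [3, 2, 4],
    "p3": [8, 9, 7],
    "p4": [4, 5, 3],
}
comprehension = {"p1": 72, "p2": 55, "p3": 81, "p4": 60}  # test scores out of 100

mean_index = {
    p: np.mean([subtitle_reading_index(n, s) for n, s in zip(counts, subtitles)])
    for p, counts in fixation_counts.items()
}
r, p_val = pearsonr([mean_index[p] for p in comprehension], list(comprehension.values()))
print(f"Reading index vs comprehension: r = {r:.2f}, p = {p_val:.3f}")
```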
Article
In this paper we address the question of whether shot changes trigger the re-reading of subtitles. Although it has been accepted in the professional literature on subtitling that subtitles should not be displayed over shot changes as they induce subtitle re-reading, support for this claim in eye movement studies is difficult to find. In this study we examined eye movement patterns of 71 participants watching news and documentary clips. We analysed subject hit count, number of fixations, first fixation duration, fixation time percent and transition matrix before, during and after shot changes in subtitles displayed over a shot change. Results of our study show that most viewers do not re-read subtitles crossing shot changes.
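As a rough illustration of the kind of windowing such an analysis involves (a sketch under invented assumptions, not the study's actual pipeline), the snippet below splits subtitle-area fixations into those before and after a shot-change timestamp and uses a return of the gaze to the left edge of the subtitle as a crude proxy for re-reading.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    onset_ms: int      # fixation start time
    duration_ms: int   # fixation duration
    x: float           # horizontal position on screen (px)
    in_subtitle: bool  # whether the fixation falls in the subtitle area

def summarise_around_cut(fixations, cut_ms, subtitle_start_x=200.0):
    """Split subtitle-area fixations into before/after a shot change and
    flag a possible re-read: a post-cut fixation returning to the left
    edge of the subtitle (a crude proxy for starting to read again)."""
    before = [f for f in fixations if f.in_subtitle and f.onset_ms < cut_ms]
    after = [f for f in fixations if f.in_subtitle and f.onset_ms >= cut_ms]
    reread = any(f.x <= subtitle_start_x for f in after)
    return {
        "fixations_before": len(before),
        "fixations_after": len(after),
        "first_fixation_after_ms": after[0].duration_ms if after else None,
        "possible_reread": reread,
    }

# Hypothetical scanpath for one viewer over one subtitle crossing a cut at t = 1200 ms.
scanpath = [
    Fixation(300, 210, 220.0, True),
    Fixation(560, 180, 380.0, True),
    Fixation(790, 200, 520.0, True),
    Fixation(1250, 230, 600.0, True),   # continues reading after the cut: no return sweep
    Fixation(1520, 260, 640.0, False),  # leaves the subtitle area
]
print(summarise_around_cut(scanpath, cut_ms=1200))
```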
Article
Eye movements of children (Grade 5-6) and adults were monitored while they were watching a foreign language movie with either standard (foreign language soundtrack and native language subtitling) or reversed (foreign language subtitles and native language soundtrack) subtitling. With standard subtitling, reading behavior in the subtitle was observed, but there was a difference between one- and two-line subtitles. As two lines of text contain verbal information that cannot easily be inferred from the pictures on the screen, more regular reading occurred; a single text line is often redundant to the information in the picture, and accordingly less reading of one-line text was apparent. Reversed subtitling showed even more irregular reading patterns (e.g., more subtitles skipped, fewer fixations, longer latencies). No substantial age differences emerged, except that children took longer to shift attention to the subtitle at its onset, and showed longer fixations and shorter saccades in the text. On the whole, the results demonstrated the flexibility of the attentional system and its tuning to the several information sources available (image, soundtrack, and subtitles).
Article
When foreign movies are subtitled in the local language, reading subtitles is more or less obligatory. Our previous studies have shown that knowledge of the foreign language or switching off the sound track does not affect the total time spent in the subtitled area. Long-standing familiarity with subtitled movies and processing efficiency have been suggested as explanations. Their effects were tested by comparing American and Dutch-speaking subjects who differ in terms of subtitling familiarity. In Experiment 1, American subjects watched an American movie with English subtitles. Despite their lack of familiarity with subtitles, they spent considerable time in the subtitled area. Accordingly, subtitle reading cannot be due to habit formation from long-term experience. In Experiment 2, a movie in Dutch with Dutch subtitles was shown to Dutch-speaking subjects. They also looked extensively at the subtitles, suggesting that reading subtitles is preferred because of efficiency in following and understanding the movie. However, the same findings can also be explained by the more dominant processing of the visual modality. The proportion of time spent reading subtitles is consistently larger with two-line subtitles than with one-line subtitles. Two explanations are provided for the differences in watching one- and two-line subtitles: (a) the length expectation effect on switching attention between picture and text and (b) the presence of lateral interference within two lines of text.
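A minimal sketch of the proportion measure discussed above, assuming invented fixation durations and display times rather than the study's data: the proportion is total fixation time in the subtitle area divided by the subtitle's presentation time, averaged separately for one-line and two-line subtitles.

```python
def subtitle_dwell_proportion(fixation_durations_ms, display_time_ms):
    """Proportion of a subtitle's display time spent fixating inside
    the subtitle area (total fixation duration / presentation duration)."""
    return sum(fixation_durations_ms) / display_time_ms

# Hypothetical per-subtitle fixation durations (ms) and display times (ms).
one_line = [([210, 190], 2000), ([180, 220, 150], 2500), ([200], 1800)]
two_line = [([230, 240, 210, 190], 4000), ([250, 220, 230, 180, 200], 5000)]

for label, items in (("one-line", one_line), ("two-line", two_line)):
    props = [subtitle_dwell_proportion(durations, t) for durations, t in items]
    print(f"{label}: mean proportion of display time in subtitle area = "
          f"{sum(props) / len(props):.2f}")
```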
Article
Foreign language (FL) films with subtitles are becoming increasingly popular, and many European countries use subtitling as a cheaper alternative to dubbing. However, the extent to which people process subtitles under different subtitling conditions remains unclear. In this study, participants watched part of a film under standard (FL soundtrack and native language subtitles), reversed (native language soundtrack and FL subtitles), or intralingual (FL soundtrack and FL subtitles) subtitling conditions while their eye movements were recorded. The results revealed that participants read the subtitles irrespective of the subtitling condition. However, participants exhibited more regular reading of the subtitles when the film soundtrack was in an unknown FL. To investigate the incidental acquisition of FL vocabulary, participants also completed an unexpected auditory vocabulary test. Because the results showed no vocabulary acquisition, the need for more sensitive measures of vocabulary acquisition is discussed. Finally, the reading of the subtitles is discussed in relation to the saliency of subtitles and automatic reading behavior.
Article
Two experiments examined the effect of single-modality (sound or text) and bimodal (sound and text) presentation on word learning, as measured by both improvements in spoken word recognition efficiency (long lag repetition priming) and recognition memory. Native and advanced nonnative speakers of English were tested. In Experiment 1 auditory lexical decisions on familiar words were equally primed by prior bimodal and sound-only presentation, whereas there were no priming effects for nonwords. Experiment 2 employed a rhyme judgment task using nonwords. Repetition priming of auditory rhyme judgment decisions was now obtained, and this was greater in the bimodal than the sound-only condition. In both experiments prior bimodal presentation improved recognition memory for spoken words and nonwords compared to single modality presentation. We conclude that simultaneous text presentation can aid novel word learning under certain conditions, as assessed by both explicit and implicit memory tests.
Chapter
In daily life people are often confronted with more than one source of information at a time, as, for example, when watching television. A television program has at least two channels of information: a visual one (the image) and an auditory one (the sound). In some countries most of the television programs are imported from abroad and subtitled in the native language. The subtitles, then, are a third source of information. Characteristically, each of these three sources of information is partly redundant: they do not contradict but rather supplement one another, or express the same content in a different form.
Article
The advent of neuroimaging opened new research perspectives for the psycholinguist as it became possible to look at the neuronal mass activity that underlies language processing. Studies of brain correlates of psycholinguistic processes can complement behavioural results, and in some cases can lead to direct information about the basis of psycholinguistic processes. Even more importantly, the neuroscience move in psycholinguistics made it possible to advance language theorising to the level of the brain. This article discusses neurophysiological imaging with electroencephalography and magnetoencephalography. It examines behavioural and neurophysiological evidence in psycholinguistic research, focusing on lexical class membership and word frequency. The article also considers event-related potentials indicating language processing, early and late language potentials and their implications for psycholinguistics, the universe of psycholinguistic variables and its neurophysiological reality, and laterality of neurophysiological activity interpreted as the critical brain feature of language.
Article
Our work serves as an assay of the visual impact of text chunking on live (respoken) subtitles. We evaluate subtitles constructed with different chunking methods to determine whether segmentation influences comprehension or otherwise affects the viewing experience. Disparities in hearing participants' recorded eye movements over four styles of subtitling suggest that chunking reduces the amount of time spent reading subtitles.
Article
Ever since Karen Price's ground-breaking work in 1983, we have known that same-language subtitles (captions) primarily intended for the deaf and hearing-impaired can provide access to foreign language films and TV programmes which would otherwise be virtually incomprehensible to non-native-speaker viewers. Since then, researchers have steadily built up our knowledge of how learners may make use of these when watching. The question remains, however, whether, and to what extent, watching subtitled programmes over time helps develop learners’ language skills in various ways. Perhaps surprisingly, this question of long-term language development has still not been fully addressed in the research literature and we appear to be in a largely ‘confirmatory’ cycle. At an informal level, on the other hand, there are countless stories of learners who have been assisted in learning a foreign language by watching subtitled or captioned films and television. I shall review the contributions of key research studies to build up a picture of the current state of our knowledge and go on to outline, first, the current gaps in research and, second, some encouraging new approaches to learning by autonomous ‘users’ of foreign-language Internet media and same-language subtitles across languages, now more widely available.
Article
This study examined the effects of captioned video material on ESL student comprehension with videotaped episodes presenting both low and high audio/video correlation as defined by Garza (1991). Prior research has been restricted to high audio/video correlation material in which the audio track was strongly supported by the video portion (visual images). A total of thirty-seven advanced and thirty-four intermediate ESL students participated in the experiment. The results revealed that both groups were able to recall significantly more idea units (p < .01) when the captions were available with the episode presenting a low level of visual support (low audio/video correlation). Conversely, caption availability did not substantially improve student recall with the episode presenting a high audio/video correlation.
Article
In this study, fifteen European learners of English, between high-intermediate and post-proficiency level, watched nine hour-long sessions of BBC general output television programmes with CEEFAX English language subtitles. The aim of the study was to investigate the potential benefits to be gained in terms of language learning from watching sub-titled programmes. The subjects provided detailed feedback on language gained from the programmes, on their reactions to the sub-titles, on strategies used in exploiting the sub-titles, on levels of anxiety, on the comprehensibility of the sound and text, and on the programmes themselves. The subjects also undertook a limited number of language-oriented activities connected with the programmes. Subjects reported that they found the sub-titles useful and beneficial to their language development and that they were able to develop strategies and techniques for using sub-titles flexibly and according to need. The findings suggested that sub-titled programmes may be of limited value for low-level learners, but may provide large amounts of comprehensible input for post-intermediate-level learners. The findings also indicated that sub-titles promote a low affective filter, encourage conscious language learning in 'literate' learners, and, paradoxically, release spare language-processing capacity.
Article
In an experimental study, we analyzed the cognitive processing of a subtitled film excerpt by adopting a methodological approach based on the integration of a variety of measures: eye-movement data, word recognition, and visual scene recognition. We tested the hypothesis that the processing of subtitled films is cognitively effective: It leads to a good understanding of film content without requiring a significant tradeoff between image processing and text processing. Following indications in the psycholinguistic literature, we also tested the hypothesis that two-line subtitles whose segmentation is syntactically incoherent can have a disruptive effect on information processing and recognition performance. The results highlighted the effectiveness of subtitle processing: Regardless of the quality of line segmentation, participants had a good understanding of the film content, they achieved good levels of performance in both word and scene recognition, and no tradeoff between text and image processing was detected. Eye-movement analyses enabled a further characterization of cognitive processing during subtitled film viewing. This article discusses the theoretical implications of the findings for both subtitling and multiple-source communication and highlights their methodological and applied implications.
Article
As increasing numbers of foreign language programs begin to integrate video materials into their curricula, more attention is being focused on ways and means to optimize the student's comprehension of the language of film and television segments. This article reports on the results of research conducted to evaluate the use of captioning (on-screen target language subtitles) as a pedagogical aid to facilitate the use of authentic video materials in the foreign language classroom, especially in advanced or upper-level courses. Using Russian and ESL as target languages, the data collected strongly support a positive correlation between the presence of captions and increased comprehension of the linguistic content of the video material, suggesting the use of captions to bridge the gap between the learner's competence in reading and listening. The paper includes a detailed description of the research methodology, implementation, data analysis, and conclusions. A discussion of the results and suggestions for further research are also included.
Article
The purpose of this study was to examine the effects of captioned videotapes on advanced, university-level ESL students' listening word recognition. A total of 118 ESL students participated in the study. The videotaped materials consisted of episodes from two separate educational television programs concerning whales and the civil rights movement. The results for both passages revealed that the availability of captions significantly improved the ESL students' ability to recognize words on the videotapes that also appeared on the subsequent listening-only (listening stems and alternatives) multiple-choice tests. Recommendations for using captions to enhance second language student listening and reading comprehension are included.
Article
In this article, a study is described which aimed at gaining insights into how learners of English may best make use of television programmes with uni-lingual sub-titles intended for the deaf and hard-of-hearing. While these sub-titles do appear to make a wide variety of programmes accessible that would otherwise be impossible or at least very difficult to follow, there are several limitations to their value as a language learning resource. The findings of the study also suggest that a complex model is needed to capture the processes observed in and reported by learners who make use of the extensive comprehensible input available in television programmes.
Article
Based on cognitive load theory, this study investigated the effect of simultaneous written presentations on comprehension of spoken English as a foreign language. Learners' language comprehension was compared while they used 3 instructional formats: listening with auditory materials only, listening with a full, written script, and listening with simultaneous subtitled text. Listening with the presence of a script and subtitles led to better understanding of the scripted and subtitled passage but poorer performance on a subsequent auditory passage than listening with the auditory materials only. These findings indicated that where the intention was learning to listen, the use of a full script or subtitles had detrimental effects on the construction and automation of listening comprehension schemas.
The Impact of Translation Strategies on Subtitle Reading
  • Elisa Ghia
Ghia, Elisa. 2012. "The Impact of Translation Strategies on Subtitle Reading." In Eye Tracking in Audiovisual Translation, ed. by Elisa Perego, 155-182. Roma: Aracne Editrice.
Eye Tracking with Text
  • Gregory D Keating
Keating, Gregory D. 2014. "Eye Tracking with Text." In Research Methods in Second Language Psycholinguistics, ed. by Jill Jegerski, and Bill VanPatten, 69-92. New York: Routledge.
The Psycholinguistics of SLA
  • Bill VanPatten
VanPatten, Bill. 2014. "The Psycholinguistics of SLA." In Research Methods in Second Language Psycholinguistics (Second Language Acquisition Research Series), ed. by Jill Jegerski, and Bill VanPatten, 1-19. New York: Routledge.
Factors Influencing the Use of Captions by Foreign Language Learners: An Eye Tracking Study
  • Paula Winke
  • Susan Gass
  • Tetyana Sydorenko
Winke, Paula, Susan Gass, and Tetyana Sydorenko. 2013. "Factors Influencing the Use of Captions by Foreign Language Learners: An Eye Tracking Study." The Modern Language Journal 97 (1): 254-275. doi: 10.1111/j.1540-4781.2013.01432.x