Proceedings of EDULEARN17 Conference, 3rd-5th July 2017, Barcelona, Spain
ISBN: 978-84-697-3777-4
Sheila Castilho, Federico Gaspari, Joss Moorkens, Andy Way
ADAPT Centre, Dublin City University (IRELAND)
This paper presents TraMOOC (Translation for Massive Open Online Courses), a European research
project developed with the intention of empowering international learners in the digital multilingual
world by providing reliable machine translation (MT) specifically tailored to MOOCs from English into
11 languages (Bulgarian, Chinese, Croatian, Czech, Dutch, German, Greek, Italian, Polish,
Portuguese, and Russian). The paper describes how the project is addressing the challenges involved
in developing an innovative, high-quality MT service for producing accurate translations of
heterogeneous multi-genre MOOC materials, encompassing subtitles of video lectures, assignments,
tutorials, and social web text posted on student blogs and fora. Based on the results of a large-scale
and multi-method evaluation conducted as part of the TraMOOC project, we offer a reflection on how
to best integrate state-of-the-art MT into MOOC platforms. The conclusion summarizes the key
lessons learned, which can be applied by the wider community of international professionals with an
interest in the multilingual aspects of innovative education and new learning technologies.
Keywords: MOOCs, machine translation (MT), translation, e-learning, distance learning.
1 Introduction

1.1 Background and motivation of the study
Massive Open Online Courses (MOOCs) offer valuable learning opportunities in several disciplines to
many students, to a large extent regardless of their background, location, and personal circumstances
[1]. Views about the actual potential of MOOCs inevitably vary, mostly depending on the subjects
being taught and on the pedagogic attitudes of the instructors ([2], [3], [4] and [5]), but MOOCs are
gradually starting to have an impact on teaching practice, at least for some disciplines (see, e.g., [6]).
One widely held view is that MOOCs may represent effective means of disseminating knowledge and
training to disadvantaged communities or individual students living in remote areas, with limited or no
access to traditional teaching and learning facilities, such as colleges, public libraries, qualified
teaching staff, technical equipment or laboratories [7]. However, rather surprisingly, there is growing
evidence that MOOC participants are in fact predominantly already qualified professionals from
privileged backgrounds mostly based in high-income, industrialized countries (e.g. [8], [9] and [10]).
One explanation for this apparent failure of MOOCs’ original mission of broadening access to
education and training is that it hinges significantly on language-related limitations. MOOCs are
typically available in a single language shared between tutors and students, which has the added
benefit of enabling interactions on social platforms and fora accompanying formal
instruction [11]. However, language barriers impede broad use of high-quality MOOC materials across
national and language boundaries, severely limiting peer-to-peer as well as student-instructor
interactions alongside the more formal components of MOOC-based instruction: English is often
chosen as the common language of MOOCs with international reach; this, however, is far from ideal,
especially because it prevents large groups of potential users from fully engaging in a fulfilling MOOC
experience, thus wasting precious learning opportunities for innumerable motivated students around
the world. In an increasingly globalized and mobile society, in which academic institutions as well as
individual trainers are under growing pressure to seize the opportunities offered by internationalization,
there is a strong need for high-quality digital teaching and learning resources to be distributed across
linguistic and cultural boundaries [12].
Against this background, the paper reports the experience of the international research project
TraMOOC (Translation for Massive Open Online Courses). The paper is structured as follows: after these introductory remarks on the
background and motivation of the study, Section 1.2 provides more detail on the project, emphasizing
its aims and expected outcomes. Section 2 describes the evolution of the main approaches to MT
system design, from the traditional rule-based architecture to the more recent statistical and neural
paradigms, which are now competing to be recognized as the state of the art. Section 3 discusses the
application of MT for MOOCs, highlighting the difficulties inherent in the types of texts that form a
MOOC, and Section 4 details our development and evaluation of MT systems within the TraMOOC
project. Finally, Section 5 concludes by summarizing the key lessons learned from this work that can
be useful to the wider community of instructors and institutions interested in delivering innovative and
effective education opportunities via MOOCs to multilingual students, also outlining some possibilities
for future work in the rapidly evolving area at the crossroads of MOOCs and MT.
1.2 The TraMOOC project: aims and expected outcomes
One issue that cuts across all MOOCs with significant impact on their uptake and effectiveness is that
of the language(s) of instruction: this, in itself, is a crucial factor in restricting or, on the contrary,
widening access to education and training delivered via MOOCs [13]. Making MOOC contents
available in multiple languages has obvious benefits, and there have already been attempts to support
language diversity within MOOCs with a European focus [14]. In the ambitious attempt to address the
numerous and complex challenges entailed by this endeavour, TraMOOC aims at developing high-
quality MT of the multifarious text genres typically included in MOOCs from English into 9 European
(i.e. Bulgarian, Croatian, Czech, Dutch, German, Greek, Italian, Polish and Portuguese) and 2 so-
called BRIC languages (namely, Chinese and Russian). While these diverse target languages
constitute strong use cases in the MOOC space, some of them have proven difficult to translate
into, a difficulty further compounded by the weak or fragmentary support in terms of the language
resources and processing tools required to build some of the relevant MT systems. This scenario poses
significant research and development challenges to the TraMOOC project consortium.
The main outcome of the project lies in the development of a high-quality semi-automated MT platform
for all types of textual data normally encountered in MOOCs, which typically range from subtitles of
video lectures to instructions for completing assignments, presentation slides, posts shared on student
blogs and comments sent to course fora. The core of the final service will be open-source and some
premium add-on services are expected to be commercialized, including MT support for additional
target languages of interest to the users, MT post-editing, transcription and subtitling of video-based
course contents, as well as professional translation. The ultimate goal is to turn the MOOC translation
service into a platform enabling the integration of any MT system chosen by the users, for any desired
language, for the educational domain.
2 Approaches to MT system design

MT has made substantial progress over the course of its history. Until the mid-1990s, rule-based MT
systems were the norm: these required significant investments and huge resources to be built,
including skilled computational linguists and programmers. This meant that MT systems were
available only for a limited number of well-resourced languages with substantial commercial interest.
In the late 1990s, a new data-driven paradigm emerged in MT system development, namely statistical
MT (SMT), which quickly became the dominant approach in both research and market-oriented
commercial applications. The principle underlying this approach is to do away with explicit linguistic
rules altogether. Instead, translation patterns (i.e. correspondences between phrases in the source
and in the target languages) are inferred automatically from the analysis of parallel corpora, i.e. huge
collections of sentence-aligned professional (i.e. human-quality) translations. SMT systems estimate
the degree of probability for the correspondence of short bilingual chunks of text extracted from the
analysis of the parallel corpora, and subsequently generate the output in the target language based on
complex statistical calculations.
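The counting at the heart of this estimation can be illustrated with a toy sketch. The miniature "parallel corpus", the phrase segmentation and the assumed one-to-one phrase alignment below are all invented for illustration; real SMT training uses far larger corpora and automatically induced alignments.

```python
from collections import Counter

# Toy sentence-aligned "parallel corpus" (invented for illustration):
# each pair holds phrase-segmented English and its German counterpart.
parallel_corpus = [
    (("the course", "starts today"), ("der Kurs", "beginnt heute")),
    (("the course", "is free"), ("der Kurs", "ist kostenlos")),
    (("take part in", "the course"), ("nehmen Sie teil an", "dem Kurs")),
]

def phrase_translation_probs(corpus):
    """Estimate P(target_phrase | source_phrase) by relative frequency of
    aligned phrase pairs, as a phrase-based SMT system does at training time."""
    pair_counts = Counter()
    source_counts = Counter()
    for src_sent, tgt_sent in corpus:
        # Assume a one-to-one phrase alignment for this toy example.
        for src, tgt in zip(src_sent, tgt_sent):
            pair_counts[(src, tgt)] += 1
            source_counts[src] += 1
    return {pair: n / source_counts[pair[0]] for pair, n in pair_counts.items()}

probs = phrase_translation_probs(parallel_corpus)
# "the course" aligns to "der Kurs" in 2 of its 3 occurrences.
print(round(probs[("the course", "der Kurs")], 2))  # 0.67
```

At decoding time, such probabilities are combined with further statistical models (e.g. a target-language model) to score candidate translations.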
SMT systems can be built much faster and at a fraction of the cost of traditional rule-based ones, for
many more language pairs, using open-source development toolkits, such as Moses [15]. In addition,
SMT systems can be customized much more effectively than rule-based ones to different domains
and text types. More recently, the neural approach has emerged as a promising further development
in MT system design, attracting interest not only from academic researchers, but also from players in
the language, translation and localization industry, because neural MT (NMT) systems have
outperformed SMT systems for a number of language pairs in recent comparative evaluations. Simply
put, NMT exploits neural networks and deep learning techniques drawn from artificial intelligence to
map entire sentences from the source to the target language all at once, instead of breaking them
down into smaller units (typically individual words, or fixed sequences of a few words), as is the case
in SMT. This offers some advantages, although it is still debated whether NMT is superior to SMT.
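The difference in translation units can be made concrete with a small sketch. The function below (a hypothetical helper, not part of any toolkit) enumerates the short contiguous chunks a phrase-based SMT system would consider translating independently, whereas an NMT system conditions its output on the entire token sequence at once.

```python
def smt_phrase_inventory(sentence, max_len=3):
    """List every contiguous chunk of up to max_len words: the small,
    independently translated units a phrase-based SMT system works with.
    An NMT system, by contrast, encodes this whole token sequence at once."""
    words = sentence.split()
    return [" ".join(words[i:i + n])
            for n in range(1, max_len + 1)
            for i in range(len(words) - n + 1)]

units = smt_phrase_inventory("the course starts today")
print(len(units))  # 9 chunks: 4 single words + 3 bigrams + 2 trigrams
```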
3 MT for MOOCs

Several recent studies address the crucial issue of evaluating and improving the quality, effectiveness
and success of MOOCs (see, e.g., [16] and [17]), and research has also been devoted to evaluating
the level of engagement afforded by MOOCs (e.g. [18]). This body of work provides, either implicitly or
explicitly, indications concerning good practice [19]. What is conspicuously absent from this
substantial body of work is the language dimension of MOOC-based instruction, especially when, as is
often the case, MOOCs have the ambition of being delivered internationally, to course takers with
different linguistic and cultural backgrounds: this necessarily raises the issue of how to effectively
translate these digital teaching and learning resources, so that their eventual multilingual nature
contributes to their overall value for students, rather than detracting from it.
We regard this as a major gap in the MOOC literature, and contend that the language used to impart
knowledge and support interactions associated with MOOCs is a key factor in the quality,
effectiveness and success of learning experiences for international students; this factor deserves
more attention, and this paper represents a first step in this direction. The broad questions
addressed in the work reported here are whether the time is ripe for the integration of MT into
MOOCs, and how to best go about selecting the most effective MT solution for this purpose.
A particularly interesting application domain that has recently emerged for MT concerns user-
generated content (UGC) [20]. Successful techniques have been developed, for example, for the
domain adaptation of MT systems to deal with user comments in the e-commerce scenario [21], with
several experiments showing the feasibility of this rather challenging task, even though it is certainly
hard to obtain high-quality MT output in this area. UGC is also found in typical MOOC data, and the
TraMOOC project aims at providing reliable MT for it, too, which is extremely challenging, because
UGC is often poorly formulated, with relatively frequent spelling mistakes and grammatical
inaccuracies, and more generally sub-standard, or non-conventional, language.
4 MT development and evaluation in TraMOOC

For the TraMOOC project, we undertook to evaluate which of the two leading approaches to MT
system design competing to be the state of the art in the field, namely SMT or NMT (see Section 2), is
better suited to be integrated into a MOOC platform to effectively deliver digital learning resources
multilingually. The overall study is reported in more detail in [22].
The SMT and the NMT systems used for this evaluation were built using state-of-the-art procedures,
aimed at guaranteeing the highest possible quality; in particular, for the statistical approach, a phrase-
based architecture was used, while the NMT systems generally followed the settings of [23]. All the
systems were trained on a variable mix of general, i.e. out-of-domain, and in-domain educational data,
due to the different resources available for each language combination. The general training data
ranged from a minimum of 21.30 million sentence pairs for EN-RU, to a maximum of almost 32 million
sentence pairs for EN-PT; the much smaller in-domain training data sets consisted of a minimum of
approximately 140,000 sentence pairs for EN-EL, going up to 2.31 million sentence pairs for EN-RU.
Four sets of 250 English sentences each were translated into German, Greek, Portuguese and
Russian using the SMT and NMT systems. Our evaluation is based on a set of four widely used
automatic MT quality evaluation metrics: HTER (Human-targeted Translation Error Rate) [24], BLEU
[25], METEOR [26] and chrF [27]. For the human assessment, we selected the following state-of-the-art
methods: fluency and adequacy rating, post-editing, error annotation and ranking. Professional
translators were asked to assess the translations using these methods and to post-edit the sentences. These procedures
are widely used in the MT field in order to assess the quality of a given MT system.
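Of the automatic metrics above, chrF [27] is simple enough to sketch in a few lines. The version below is a simplified sentence-level variant (it averages precision and recall only over n-gram orders that actually occur, and strips whitespace); the official metric also supports corpus-level scoring and other settings.

```python
from collections import Counter

def chrf(hypothesis, reference, max_n=6, beta=2.0):
    """Simplified sentence-level chrF: an F-score over character n-grams
    (n = 1..max_n), weighting recall beta times as much as precision.
    Whitespace is removed, as in the common variant of the metric."""
    hyp = hypothesis.replace(" ", "")
    ref = reference.replace(" ", "")
    precisions, recalls = [], []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(hyp[i:i + n] for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(ref[i:i + n] for i in range(len(ref) - n + 1))
        overlap = sum((hyp_ngrams & ref_ngrams).values())  # clipped matches
        if hyp_ngrams:
            precisions.append(overlap / sum(hyp_ngrams.values()))
        if ref_ngrams:
            recalls.append(overlap / sum(ref_ngrams.values()))
    p = sum(precisions) / len(precisions) if precisions else 0.0
    r = sum(recalls) / len(recalls) if recalls else 0.0
    if p + r == 0:
        return 0.0
    return (1 + beta**2) * p * r / (beta**2 * p + r)

print(round(chrf("der Kurs beginnt heute", "der Kurs beginnt heute"), 2))  # 1.0
```

Because it operates on characters rather than words, chrF is comparatively robust for morphologically rich target languages, which is one reason it was included in the evaluation.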
The results of this large-scale evaluation, which are reported in full in [22], show that NMT receives
higher scores than SMT with all four automatic evaluation metrics (even though improvements for
Portuguese are very limited), and side-by-side ranking also shows a clear preference for NMT output
across the board, for all the language pairs and MOOC domains covered in this comparative study.
We can also conclude that NMT offers improvements in terms of fluency and word order errors over
SMT, mostly due to its better handling of word reordering. In addition, fewer sentences translated with
the NMT systems include errors, and NMT seems to perform better than SMT on morphologically rich
and highly inflected target languages.
In contrast, however, adequacy does not show marked improvements with NMT, and the situation is
mixed for errors of omission, addition and mistranslation, so much so that overall NMT does not entail
noticeable reductions in post-editing effort. Moreover, in-depth investigations of automatic MT
evaluation metric scores reveal that the performance of NMT tends to degrade for longer sentences
(more than 20 tokens), where SMT appears to be more reliable: the sentence length of the MOOC text
to be translated is one of the factors to be considered in order to decide which MT system provides the
best quality. Based on this evidence, for the final stages of the TraMOOC project the decision was
made to favour the NMT approach over SMT for the language pairs under consideration, as there are
indications that this approach holds the greatest potential for quality going forward. However, applying
MT to new language pairs and other MOOC domains may present different challenges, which is why
we are hesitant to make broader conclusive generalizations.
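The sentence-length consideration above suggests a simple routing heuristic for a hybrid setup. The sketch below is purely illustrative: the threshold and the engine labels are assumptions for this example, not a description of the TraMOOC platform.

```python
def choose_mt_engine(sentence, length_threshold=20):
    """Toy routing rule: prefer NMT, except for long sentences (over
    ~20 tokens), where the evaluation found SMT to be more reliable.
    Threshold and engine labels are illustrative assumptions."""
    n_tokens = len(sentence.split())
    return "SMT" if n_tokens > length_threshold else "NMT"

print(choose_mt_engine("Submit your assignment before Friday."))  # NMT
```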
5 Conclusions and future work

This paper has discussed the outcomes of a large-scale, multi-method evaluation, comparing the
quality of SMT and NMT output for MOOC data in a diverse set of language combinations of interest
to the TraMOOC project, i.e. from English into German, Greek, Portuguese and Russian. The
evaluation involved four state-of-the-art automatic evaluation metrics (i.e. HTER, BLEU, METEOR,
and chrF), as well as a range of more labour-intensive manual methods (fluency and adequacy, post-
editing, error annotation and ranking). In conclusion, consistent with findings in other application domains, the
large-scale multi-method evaluations based on our MOOC data suggest that the emerging neural
approach to MT offers some noticeable advantages over the competing and well-established SMT
paradigm. Our findings also show that, while NMT represents an improvement over SMT in some areas, further
work is still required to consolidate the current promising performance before NMT can be recognized
as the new state of the art in MT [22]. As far as our own work in TraMOOC is concerned, subsequent
planned evaluations include the identification of source-language phenomena that are likely to cause
particularly serious errors in the output, depending on the target languages and the MT system type.
Another avenue for further research consists in task-based evaluations, which would provide a useful
addition to the range of evaluation methods that have already been applied in preparation for this
paper: task-based evaluation involves exposing real users to MOOC content machine-translated into
one of TraMOOC’s target languages, and then assessing their understanding and knowledge of that
translated material. Their performance can be judged against the baseline of students using the same
original English-language MOOC in preparation for identical tests, to give an indication of how
effective and successful the application of the MT system concerned is for the specific MOOC domain.
The application of MT to the various types of texts that make up a MOOC is undoubtedly a complex
task. At the end of the TraMOOC project we hope to provide a roadmap for using automatic translation
and user post-editing of MOOC materials, as well as a platform via which this work may be carried out
using state-of-the-art MT technology, as part of the ultimate aim of making MOOC resources more
accessible to non-English-speaking users. The results of our evaluations of NMT quality using MOOC
texts have so far been promising. This work is continuing at a larger scale, with results feeding back to
the MT development team, in the hope of facilitating multilingual MOOC resources that are
comprehensible and beneficial to global end users.
Acknowledgements

The TraMOOC project has received funding from the European Union’s Horizon 2020 research and
innovation programme under grant agreement 644333. The ADAPT Centre for Digital Content
Technology at Dublin City University is funded under the Science Foundation Ireland Research
Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development
Fund.

References

[1] M. Nanfito, MOOCs: Opportunities, Impacts, and Challenges. Massive Open Online Courses in
Colleges and Universities. CreateSpace Independent Publishing Platform, 2014.
[2] S.D. Krause, and C. Lowe (eds), Invasion of the MOOCs: The Promises and Perils of Massive
Open Online Courses. Parlor Press, 2014.
[3] D.G. Glance, M. Forsey, and M. Riley, “The pedagogical foundations of massive open online
courses,” in First Monday, vol. 18, no. 5, 2013.
[4] R. Kop, H. Fournier, and J. Mak, “A pedagogy of abundance or a pedagogy to support human
beings? Participant support on massive open online courses,” in International Review of
Research in Open and Distance Learning, vol. 12, no. 7, pp. 74–93, 2011.
[5] J. Mackness, M. Waite, G. Roberts, and E. Lovegrove, “Learning in a small, task-oriented,
connectivist MOOC: Pedagogical issues and implications for higher education,” in The
International Review of Research in Open and Distance Learning, vol. 14, no. 4, 2013.
[6] E.A. Monske, and K.L. Blair (eds), Handbook of Research on Writing and Composing in the Age
of MOOCs. IGI Global, 2017.
[7] B. Wildavsky, “MOOCs in the Developing World: Hope or Hype?,” in International Higher
Education, no. 80, pp. 23–25, 2015.
[8] G. Christensen, A. Steinmetz, B. Alcorn, A. Bennett, D. Woods, and E.J. Emanuel, “The MOOC
Phenomenon: Who Takes Massive Open Online Courses and Why?,” in Social Science
Research Network, 2013.
[9] T. Liyanagunawardena, S. Williams, and A. Adams, “The impact and reach of MOOCs: a
developing country’s perspective,” in eLearning Papers, no. 33, 2013.
[10] D. Laurillard, “The educational problem that MOOCs could solve: professional development for
teachers of disadvantaged students,” in Research in Learning Technology, vol. 24, no. 1, 2016.
[11] S. Mak, R. Williams, and J. Mackness, “Blogs and forums as communication and learning tools
in a MOOC,” in Proceedings of the 7th International Conference on Networked Learning 2010 (L.
Dirckinck-Holmfeld, V. Hodgson, C. Jones, M. de Laat, D. McConnell, and T. Ryberg, eds.), pp.
275–284, 2010.
[12] C. Yeager, B. Hurley-Dasgupta, and C.A. Bliss, “cMOOCs and Global Learning: An Authentic
Alternative,” in Journal of Asynchronous Learning Networks, vol. 17, no. 2, pp. 133–147, 2013.
[13] T. Beaven, A. Comas-Quinn, M. Hauck, B. de los Arcos, and T. Lewis, “The Open Translation
MOOC: creating online communities to transcend linguistic barriers,” in Journal of Interactive
Media in Education, 2013.
[14] F. Brouns, N. Serrano Martínez-Santos, J. Civera, M. Kalz, and A. Juan, “Supporting language
diversity of European MOOCs with the EMMA platform,” in Proceedings of the European MOOC
Stakeholder Summit 2015 Research Track, pp. 157–165, 2015.
[15] P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen,
C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst, “Moses: Open Source
Toolkit for Statistical Machine Translation,” in Proceedings of the ACL 2007 Demo and Poster
Sessions, pp. 177–180, 2007.
[16] M.J. Israel, “Effectiveness of Integrating MOOCs in Traditional Classrooms for Undergraduate
Students,” in The International Review of Research in Open and Distributed Learning, vol. 16,
no. 5, 2015.
[17] D. Gamage, I. Perera, and S. Fernando, “A framework to analyze effectiveness of eLearning in
MOOC: Learners’ perspective,” in Proceedings of the 8th International Conference on Ubi-Media
Computing (UMEDIA), Colombo, Sri Lanka: IEEE, 2015.
[18] C. Milligan, A. Littlejohn, and A. Margaryan, “Patterns of Engagement in Connectivist MOOCs,”
in Journal of Online Learning and Teaching, vol. 9, no. 2, pp. 149–159, 2013.
[19] M. Bali, “MOOC Pedagogy: Gleaning Good Practice from Existing MOOCs,” in Journal of Online
Learning and Teaching, vol. 10, no. 1, pp. 44–57, 2014.
[20] A. Way, “Traditional and Emerging Use-Cases for Machine Translation,” in Proceedings of
Translating and the Computer 35, 2013.
[21] M. Fernández-Barrera, V. Popescu, A. Toral, F. Gaspari, and K. Choukri, “Enhancing Cross-
border EU e-commerce through Machine Translation: Needed Language Resources,
Challenges and Opportunities,” in Proceedings of the 10th Language Resources and Evaluation
Conference, pp. 4550–4556, Paris: European Language Resources Association, 2016.
[22] S. Castilho, J. Moorkens, F. Gaspari, I. Calixto, J. Tinsley, and A. Way, “Is Neural Machine
Translation the New State of the Art?,” in The Prague Bulletin of Mathematical Linguistics, vol.
108, 2017.
[23] R. Sennrich, B. Haddow, and A. Birch, “Edinburgh Neural Machine Translation Systems for
WMT 16,” in Proceedings of the First Conference on Machine Translation, vol. 2, Shared Task
Papers, pp. 371–376, 2016.
[24] M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul, “A study of translation edit rate
with targeted human annotation,” in Proceedings of the Conference of the Association for
Machine Translation in the Americas, pp. 223–231, 2006.
[25] K. Papineni, S. Roukos, T. Ward, and W. Zhu, “BLEU: A Method for Automatic Evaluation of
Machine Translation,” in Proceedings of the 40th Annual Meeting of the Association for
Computational Linguistics, pp. 311–318, 2002.
[26] A. Lavie, and A. Agarwal, “METEOR: An Automatic Metric for MT Evaluation with High Levels of
Correlation with Human Judgments,” in Proceedings of the Workshop on Statistical Machine
Translation, pp. 228–231, 2007.
[27] M. Popović, “chrF: character n-gram F-score for automatic MT evaluation,” in Proceedings of
the 10th Workshop on Statistical Machine Translation, pp. 392–395, 2015.
This research makes an original contribution to the fields of language education and educational technology by mobilising knowledge from computer science, corpus linguistics and open education, and proposes a new paradigm for open data-driven language learning systems design in higher education. Furthermore, the research presented in this thesis uncovers and engages with an infrastructure of open educational practices (OEP) that push at the parameters of policy for the reuse of open access research and pedagogic content in the design, development, distribution, adoption and evaluation of data-driven language learning systems. Study 1 employs automated content analysis to mine the concept of open educational systems and practices from qualitative reflections spanning 2012-2019 with stakeholders from an on-going multi-site design-based research study with the FLAX project. Design considerations are presented for remixing domain-specific open access content for academic English language provision across formal and non-formal higher education contexts. Primary stakeholders in the research collaboration include: Knowledge organisations that provide open access to content – libraries and archives including the British Library and the Oxford Text Archive, universities in collaboration with MOOC providers, and the CORE (COnnecting REpositories) open access aggregation service at the UK Open University; Researchers who mine and remix content into corpora and open data-driven language learning systems – converging from the fields of open education, computer science, and applied corpus linguistics; Knowledge users who reuse and remix content into open educational resources (OER) for blended learning – English for Academic Purposes practitioners from university language centres. Automated content analysis (ACA) was carried out on a corpus of interview and focus-discussion data with the three stakeholder groups in this research. 
Themes arising from the ACA point to affordances as well as barriers in the adoption of open policies and practices for remixing open access content for data-driven language learning applications in higher education, against the backdrop of the different business models and cultural practices present within the participating knowledge organisations. Study 2 presents findings from an evaluative study on the design and efficacy of pedagogical English language corpora derived from the content of two MOOCs (Harvard University with edX, and the University of London with Coursera) and one networked course (Harvard Law School with the Berkman Klein Center for Internet and Society). Automated text and data mining approaches common to natural language processing were applied to these corpora, which were then linked to external open resources (e.g. Wikipedia, the FLAX LC system, WordNet), so that learners could employ the information discovery strategies (e.g. searching and browsing) that they have become accustomed to using through search engines (e.g. Google, Bing) for discovering and learning the domain-specific language features of their interests. Most notably, the non-formal learner participants in this research and development study had registered for courses in law; they had not signed up as language learners. This speaks volumes about the nature of many informal and non-formal higher education offerings, especially MOOCs, the majority of which are offered in English with no or limited support for learning the unfamiliar or semi-familiar domain-specific terms and concepts encountered in their courses. This research triangulates system query data with user studies by way of self-reported learner and teacher perceptions from surveys (N=174) on the interface designs and usability of an automated open-source digital library scheme, FLAX.
Findings indicate a positive user experience with interfaces that include advanced affordances for course-content search and retrieval of domain-specific terms and concepts, transcending the MOOC platform and Learning Management System (LMS) standard. Furthermore, survey questions derived from an open education research bank from the Hewlett Foundation are reused in this study and presented against a larger Hewlett Foundation dataset (N=1921) on motivations for the uptake of learning-support open educational resources designed for learning at scale in online higher education contexts. This study compares respondents' reported experiences of using domain-specific language learning support resources alongside other learning support techniques for minimally guided instruction in informal and non-formal online learning. A discussion of current research on new user-interface designs for the F-Lingo Chrome plug-in for FutureLearn MOOCs is also presented. Study 3 presents open principles for Computer Assisted Language Learning (CALL) design and practice. Open educational practices for designing and developing domain-specific language corpora with the open-source FLAX language project will be demonstrated and discussed with respect to the remix of openly licensed pedagogic, research and professional texts from the digital commons. The design of the open Law Collections in FLAX will be used as a running example throughout this study, in response to the scarcity of reliable and specific resources for learning legal English. A loop-input discussion will also be presented on the legal development of the Creative Commons suite of licenses, which have enabled this novel approach to English language materials development with open educational resources and open access publications for data-driven learning in the area of English for Specific Academic Purposes (ESAP).
This study presents a data-driven experiment in the legal English field to measure quantitatively the usefulness and effectiveness of employing a corpus-based online learning platform, FLAX, in the teaching of legal English. Participants in the study were 52 students in the fourth year of the Translation Degree program at the University of Murcia in Spain, who served as informants over two semesters. All of the students' linguistic competence met the B2 level of the Common European Framework of Reference for Languages. The informants were asked to write an essay on a given set of legal English topics, defined by the subject instructor as part of their final assessment. They were then divided into two groups: an experimental group who consulted the FLAX English Common Law MOOC collection as their single source of information to draft their essays, and a control group who used any information source available from the Internet, as they had traditionally done when designing and drafting essays before this experiment was carried out. The students' essays provided the database for two small learner corpora. Findings from the study indicate that members of the experimental group appear to have acquired the specialised terminology of the area better than those in the control group, as attested by the higher average term score obtained by the texts in the FLAX-based corpus (56.5), as opposed to the non-FLAX-based text collection, which scored 13.73 points lower.
Keywords: automated content analysis (ACA); blended learning; computer assisted language learning (CALL); corpus linguistics; data-driven learning (DDL); design-based research (DBR); design ethnography (DE); digital commons; English for academic purposes (EAP); English for specific purposes (ESP); English for specific academic purposes (ESAP); higher education (HE); learner corpora; learning support; massive open online courses (MOOCs); natural language processing (NLP); open access (OA); open educational practices (OEP); open educational resources (OER); open-source software (OSS); pedagogic corpora; text and data mining (TDM); terminology; user experience (UX)
... Several attempts have been made to automate parts of this process (Heyn 1996; Federico, Cattelan and Trombetti 2012; Green, Heer and Manning 2013; Läubli et al. 2013), in particular to reduce human intervention in terms of time and effort without affecting translation quality. Recently, a solution implemented within MATECAT (Fig. 1) ...
This work focuses on the extraction and integration of automatically aligned bilingual terminology into a Statistical Machine Translation (SMT) system in a Computer-Aided Translation scenario. We evaluate the proposed framework, which, taking as input a small set of parallel documents, gathers domain-specific bilingual terms and injects them into an SMT system to enhance translation quality. To this end, we investigate several strategies to extract and align terminology across languages and to integrate it in an SMT system. We compare two terminology injection methods that can easily be used at run-time without altering the normal activity of an SMT system: XML markup and a cache-based model. We test the cache-based model on two different domains (information technology and medical) in English, Italian and German, showing significant improvements ranging from 2.23 to 6.78 BLEU points over a baseline SMT system and from 0.05 to 3.03 compared to the widely used XML markup approach.
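The XML markup injection compared above amounts to wrapping source-side glossary matches in markup that carries a fixed target translation, which a Moses-style decoder can honour via its XML-input mode. The following is a hedged sketch of that annotation step, not the paper's implementation; the `<term>` tag name and the tiny glossary are assumptions for the example.

```python
import re

def mark_terms(sentence, glossary):
    """Wrap glossary matches in XML markup that a Moses-style decoder
    can pass through via its -xml-input option (illustrative sketch)."""
    # Substitute longer terms first so multi-word terms win over their parts
    for src in sorted(glossary, key=len, reverse=True):
        pattern = r"\b" + re.escape(src) + r"\b"
        repl = '<term translation="{}">{}</term>'.format(glossary[src], src)
        sentence = re.sub(pattern, repl, sentence)
    return sentence
```

A production system would also need to guard against overlapping matches and against rewriting text inside already-inserted markup; the cache-based alternative avoids markup altogether by keeping term pairs in a decoder-side cache.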
In our digitalised world, learning is moving into the cloud: from classroom teaching at the blackboard to the tablet, and on to lifelong learning in the workplace and even beyond. How successful and attractive this contemporary kind of learning is depends to no small extent on the technological possibilities offered by digital learning platforms built around MOOCs and school clouds. In their further development, learners and their learning experiences, rather than economic measures and KPIs, should be the focus. To this end, an optimisation framework was developed that identifies and prioritises improvements for learning-platform development using various qualitative and quantitative methods, and steers their assessment and implementation. Data-driven decisions should rest on a sufficient data basis, yet modern web applications often consist of several microservices, each with its own data storage, so much of the data is no longer easily accessible. This thesis therefore introduces a learning analytics service that collects and processes these data. Building on this, metrics are introduced that make the captured data usable for various purposes. Besides visualising the data in dashboards, the data feed an automated quality control, which can detect, for example, when tests are too difficult or social interaction in a MOOC is too low. The infrastructure presented can also be used to run A/B/n tests, in which several variants are tried out on different user groups in a controlled experiment. Thanks to the test infrastructure built into the HPI MOOC platform, it can be determined whether statistically significant changes in usage arise for these groups.
This was evaluated with five different improvements to the HPI MOOC platform, on which openHPI and openSAP are also based. It was shown that learners can be brought back into a course with reactivation emails; it is primarily the communication of the user's unprocessed learning content that has a reactivating effect. Digest emails summarising forum activity also achieved a positive effect. Targeted onboarding can lead users to understand the platform better and thereby be more active. The fourth test showed that linking forum questions to a specific point in a video, and displaying this information graphically, increases forum activity. Experimenting with different learning materials, as in the fifth test, is likewise helpful in MOOCs for improving course materials. Beyond these functional improvements, the thesis examines how MOOC platforms and school clouds can remain useful when users only have a weak or unreliable Internet connection (as is the case in many German schools). It is shown that clever prefetching of data can relieve the Internet connection; thanks to these adaptations, parts of the learning applications work even without an Internet connection. Finally, it is shown how end devices can supply each other with data in a local peer-to-peer CDN, without the data having to be downloaded from the Internet.
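The A/B/n tests described above ultimately come down to asking whether a usage metric differs significantly between variant groups. A minimal pure-stdlib sketch of such a check, here a two-proportion z-test on, say, per-variant completion counts, is given below; this is an illustration of the statistical idea, not the HPI platform's actual analytics code.

```python
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in conversion rates between
    variants A and B of a controlled experiment."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

With more than two variants (the "n" in A/B/n), one would instead use a chi-squared test over the full variant-by-outcome contingency table, then pairwise tests with a multiple-comparison correction.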
This doctoral thesis deals with the process of developing a subtitling program adapted to the context of higher education, from the point of view of a professional subtitler. It begins with a review of the theoretical foundations of Audiovisual Translation and of subtitling, with the aim of defining the linguistic and semiotic characteristics of the audiovisual text and its translation. It continues with a study of explicit extratextual norms and with the identification of the functionalities that enable their application, so that the subtitler can adapt the text to the expectations of the target culture. The present work also addresses the professional aspect of this modality by studying the impact that technological advances have on this practice and on the subtitling tools available to translators. It likewise presents the use of subtitled audiovisual material in higher education through a specific case of a multilingual and multicultural distance-learning environment. This analysis of subtitling from different perspectives concludes with the design of Miro Translate, a hybrid cloud platform specially designed for the subtitling of pedagogical videos. Finally, the quality of this tool is studied through a usability test that measures users' perceived satisfaction, efficiency and effectiveness, in order to identify the actions needed for its improvement.
In this chapter, we will be reviewing state of the art machine translation systems, and will discuss innovative methods for machine translation, highlighting the most promising techniques and applications. Machine translation (MT) has benefited from a revitalization in the last 10 years or so, after a period of relatively slow activity. In 2005 the field received a jumpstart when a powerful complete experimental package for building MT systems from scratch became freely available as a result of the unified efforts of the MOSES international consortium. Around the same time, hierarchical methods had been introduced by Chinese researchers, which allowed the introduction and use of syntactic information in translation modeling. Furthermore, the advances in the related field of computational linguistics, making off-the-shelf taggers and parsers readily available, helped give MT an additional boost. Yet there is still more progress to be made. For example, MT will be enhanced greatly when both syntax and semantics are on board: this still presents a major challenge though many advanced research groups are currently pursuing ways to meet this challenge head-on. The next generation of MT will consist of a collection of hybrid systems. It also augurs well for the mobile environment, as we look forward to more advanced and improved technologies that enable the working of Speech-To-Speech machine translation on hand-held devices, i.e. speech recognition and speech synthesis. We review all of these developments and point out in the final section some of the most promising research avenues for the future of MT.
This paper discusses neural machine translation (NMT), a new paradigm in the MT field, comparing the quality of NMT systems with statistical MT by describing three studies using automatic and human evaluation methods. The automatic evaluation results presented for NMT are very promising; however, human evaluations show mixed results. We report increases in fluency but inconsistent results for adequacy and post-editing effort. NMT undoubtedly represents a step forward for the MT field, but one that the community should be careful not to oversell.
The demographics of MOOC analytics show that the great majority of learners are highly qualified professionals, and not, as originally envisaged, the global community of disadvantaged learners who have no access to good higher education. MOOC pedagogy fits well with the combination of instruction and peer community learning found in most professional development. A UNESCO study therefore set out to test the efficacy of an experimental course for teachers who need but do not receive high quality CPD, as a way of exploiting what MOOCs can do indirectly to serve disadvantaged students. The course was based on case studies around the world of ICT in primary education, and was carried out to contribute to the UNESCO Education for All goal. It used a co-learning approach to engage the primary teaching community in exploring ways of using ICT in primary education. Course analytics, forums and participant surveys demonstrated that it worked well. The paper concludes by arguing that this technology has the power to tackle the large-scale educational problem of developing the primary-level teachers needed to meet the goal of universal education.
We propose the use of character n-gram F-score for automatic evaluation of machine translation output. Character n-grams have already been used as a part of more complex metrics, but their individual potential has not been investigated yet. We report system-level correlations with human rankings for the 6-gram F1-score (CHRF) on the WMT12, WMT13 and WMT14 data, as well as segment-level correlations for the 6-gram F1 (CHRF) and F3 (CHRF3) scores on WMT14 data for all available target languages. The results are very promising, especially for the CHRF3 score: for translation from English, this variant showed the highest segment-level correlations, outperforming even the best metrics in the WMT14 shared evaluation task.
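The metric above is simple enough to sketch directly. The following is a simplified re-implementation of the idea, a uniform average of character n-gram F-beta scores for n = 1..6 with whitespace removed; the reference CHRF implementation differs in details such as whitespace handling, so treat this as an illustration rather than the official metric.

```python
from collections import Counter

def char_ngrams(text, n):
    # Simplest variant: drop spaces and count overlapping character n-grams
    s = text.replace(" ", "")
    return Counter(s[i:i + n] for i in range(len(s) - n + 1))

def chrf(hypothesis, reference, max_n=6, beta=3.0):
    """Uniform average of character n-gram F-beta scores for n = 1..max_n."""
    scores = []
    for n in range(1, max_n + 1):
        hyp, ref = char_ngrams(hypothesis, n), char_ngrams(reference, n)
        if not hyp or not ref:
            continue  # sentence shorter than n characters
        overlap = sum((hyp & ref).values())
        prec = overlap / sum(hyp.values())
        rec = overlap / sum(ref.values())
        if prec + rec == 0:
            scores.append(0.0)
            continue
        b2 = beta * beta
        scores.append((1 + b2) * prec * rec / (b2 * prec + rec))
    return sum(scores) / len(scores) if scores else 0.0
```

With `beta=1.0` this corresponds to the CHRF variant; the default `beta=3.0` weights recall three times as heavily as precision, as in CHRF3.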
Effectiveness in eLearning means meeting the users' learning goals. It has been the subject of much research, which has identified a variety of dimensions and factors affecting it. However, with the latest disruption in educational technology, MOOCs have changed perceptions of eLearning by taking it in a new direction; hence, effectiveness dimensions and factors identified before the introduction of MOOCs require revision to cater to the new demands of eLearning in MOOCs. At the same time, many MOOC platforms have been introduced to the market, raising a potential quality issue, as not all MOOCs produce effective results, and users need a way to gauge a MOOC's effectiveness. In search of a solution to this problem, our research produced a ten-dimensional framework for analysing the effectiveness of eLearning in MOOCs: interactivity, pedagogy, collaboration, usability, network of opportunity, motivation, technology, content, support for the learner, and assessment. The framework was built using an instrument and tested with a sample of 121 MOOC participants. Empirical results demonstrated that the instrument is within the acceptable range of verification and validation values. The ten-dimensional effectiveness framework therefore serves as a benchmark for MOOC stakeholders.
The advent of massive open online courses was accompanied by bold claims about their potential to democratize access to high-quality education in poor countries. But critics contend that MOOCs have come nowhere near meeting those expectations: most students already have degrees and live in developed countries, and only a small percentage complete their courses. Still, in absolute numbers MOOCs provide opportunities to many underserved students in the developing world. This is likely to continue as MOOCs evolve to provide blended learning and to take advantage of mobile technology. MOOCs should be viewed as an experiment, a fast-changing form of technology-enabled pedagogy that is likely to do far more good than harm in poor countries.
We participated in the WMT 2016 shared news translation task by building neural translation systems for four language pairs, each trained in both directions: English<->Czech, English<->German, English<->Romanian and English<->Russian. Our systems are based on an attentional encoder-decoder, using BPE subword segmentation for open-vocabulary translation with a fixed vocabulary. We experimented with using automatic back-translations of the monolingual News corpus as additional training data, pervasive dropout, and target-bidirectional models. All reported methods give substantial improvements, and we see improvements of 4.3--11.2 BLEU over our baseline systems.
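The BPE subword segmentation used above is learned by repeatedly merging the most frequent adjacent symbol pair in a word-frequency list until a fixed number of merge operations is reached. A minimal sketch of that merge-learning loop, simplified from Sennrich et al.'s subword-nmt and not their released code:

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn byte-pair-encoding merge operations from a word-frequency
    dict (simplified sketch of BPE subword segmentation)."""
    # Represent each word as a tuple of symbols plus an end-of-word marker
    vocab = {tuple(w) + ("</w>",): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        # Count every adjacent symbol pair, weighted by word frequency
        pairs = Counter()
        for word, freq in vocab.items():
            for a, b in zip(word, word[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = best[0] + best[1]
        # Rewrite the vocabulary with the chosen pair fused into one symbol
        new_vocab = {}
        for word, freq in vocab.items():
            out, i = [], 0
            while i < len(word):
                if i + 1 < len(word) and (word[i], word[i + 1]) == best:
                    out.append(merged)
                    i += 2
                else:
                    out.append(word[i])
                    i += 1
            new_vocab[tuple(out)] = freq
        vocab = new_vocab
    return merges
```

Applying the learned merges, in order, to unseen words yields the open-vocabulary segmentation: frequent words survive whole, while rare words decompose into smaller known subwords drawn from the fixed vocabulary.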
Massive open online courses (MOOCs) continue to attract press coverage as they change almost daily in their format, number of registrations, and potential for credentialing. An enticing aspect of the MOOC is its global reach. In this paper, we will focus on a type of MOOC called a cMOOC because it is based on the theory of connectivism and fits the definition of an open educational resource (OER) identified for this special edition of JALN. We begin with a definition of the cMOOC and a discussion of the connectivism on which it is based. Definitions and a research review are followed with a description of two MOOCs offered by two of the authors. Research on one of these MOOCs completed by a third author is presented as well. Student comments that demonstrate how a cMOOC can facilitate intercultural connections are shared. We end with reflections, lessons learned, and recommendations.
The idea of a Massive Open Online Course (MOOC) has attracted a lot of media attention in the last couple of years. MOOCs have mostly been used as stand-alone online courses without credits. However, some researchers, teachers, colleges, and universities have attempted to use MOOCs in a blended format in traditional classroom settings. This paper reviews some recent experiments in the context of current trends in MOOCs by examining methodologies used in blended MOOCs in a face-to-face environment. It further discusses preliminary findings on the effectiveness of learning outcomes and the impact on students and instructors in the blended MOOC format. This review of blended MOOCs in classrooms helps to form the emerging consensus on integrating MOOCs into conventional classroom settings, while highlighting the potential opportunities and challenges one might face when implementing MOOCs in similar or entirely different contexts.
Massive open online courses (MOOCs) have commanded considerable public attention due to their sudden rise and disruptive potential. But there are no robust, published data that describe who is taking these courses and why they are doing so. As such, we do not yet know how transformative the MOOC phenomenon can or will be. We conducted an online survey of students enrolled in at least one of the University of Pennsylvania's 32 MOOCs offered on the Coursera platform. The student population tends to be young, well educated, and employed, with a majority from developed countries. There are significantly more males than females taking MOOCs, especially in BRIC and other developing countries. Students' main reasons for taking a MOOC are advancing in their current job and satisfying curiosity. The individuals the MOOC revolution is supposed to help the most, those without access to higher education in developing countries, are underrepresented among the early adopters.