Baltic J. Modern Computing, Vol. 4 (2016), No. 2, 346-353
A Comparative Study of Post-editing Guidelines
Ke HU, Patrick CADWELL
ADAPT Centre, Dublin City University, Glasnevin, Dublin, Ireland
Ke.hu2@mail.dcu.ie, Patrick.cadwell2@mail.dcu.ie
Abstract: With the widespread use of machine translation technology in the translation industry, post-editing has been widely adopted with the aim of improving target-text quality. Every post-editing project needs specific guidelines for translators to comply with, since such guidelines help clients and LSPs set clear expectations and save translators time and effort. Different organizations make their own rules according to their needs. In this paper, we compare five sources of post-editing guidelines and point out their overlaps and differences.
Keywords: translation, light post-editing, full post-editing, post-editing guidelines
1. Introduction
Post-editing has been increasingly researched and implemented by Language Service
Providers (LSPs) in recent years as a result of the productivity gains it can bring to
translators (Guerberof, 2009; Federico et al., 2012; WEB, a). However, it has been noted
that there are no widely accepted general or standard post-editing (PE) guidelines
(DePalma, 2013; TAUS, 2016). Since needs vary, it seems that guidelines will never be general or standard. This paper, therefore, does not attempt to set a general standard for post-editing guidelines (hereafter abbreviated as PE guidelines), but instead selects, reviews and compares representative sets of PE guidelines (one produced by a resource centre for the translation industry, one by an LSP, and three by scholars). The research focuses mainly on the comparison of five proposals (O’Brien, 2010; Mesa-Lao, 2013; Flanagan and Christensen, 2014; Densmer, 2014; TAUS, 2016).
Since most organizations prefer to keep their PE guidelines for internal use only, we have access only to the few that have been published. Among them, we selected the five proposals above as our focus because they have been published recently, are relatively complete, and are organized in terms of two categories: light (rapid or fast) post-editing and full (or heavy) post-editing. For ease of comparison, all five selected sets of PE guidelines are general rather than language-dependent or aimed at specific content.
2. Different Levels of Post-editing
According to ISO 17100:2015, post-editing means to “edit and correct machine translation output” (ISO, 2015). Allen (2003) pointed out the distinction between
different levels of post-editing. He first explained the determinant factors of the post-
editing level and proposed using inbound and outbound translation to categorize the
types and levels of post-editing. For the inbound one, there are two levels: MT with no
post-editing (for browsing or gisting), and rapid post-editing. For the outbound one,
which means the translation is for publication or wide dissemination, the three levels are
MT with no post-editing, minimal post-editing and full post-editing. Apart from rapid
and full post-editing, the two popular categories, the intermediate category of minimal
post-editing was qualified as “fuzzy and wide-ranging” (Allen, 2003:304). He then
provided a number of case studies on post-editing as well as the PE guidelines of the
European Commission Translation Service (ECTS), some of which were written by
Wagner (1985). Wagner’s guidelines are general and apply to projects with severe time
constraints. Her PE guidelines have been mentioned in the research of O’Brien (2010)
and Mesa-Lao (2013). Belam (2003) proposed her “do’s and don’ts” PE guidelines
under the categories of rapid and minimal post-editing.
Rather than differentiating between guidelines for light and full post-editing, the
Translation Automation User Society (TAUS) differentiated between two levels of
expected quality, including “good enough” quality, and “human translation quality”
(TAUS, 2016). However, in this paper, for comparison purposes, we will still regard
them as light and full PE guidelines, which are the two most popular post-editing levels.
3. Definitions of Light and Full Post-editing
It can be seen clearly that most people and organizations dealing with translation have very similar views on the two levels of post-editing. Light post-editing usually means the quality is good enough or understandable, while for full post-editing, “human-like” is usually the key word. According to TAUS (2016), full post-editing should reach
quality similar to “high-quality human translation and revision” or “publishable quality”,
while light post-editing should reach a lower quality, often referred to as “good enough”
or “fit for purpose”. As DePalma (2013), founder of Common Sense Advisory, put it:
“Light post-editing converts raw MT output into understandable and usable, but not
linguistically or stylistically perfect, text… A reader can usually determine that the text
was machine-translated and touched up by a human… Full post-editing, on the other
hand, is meant to produce human-quality output. The goal is to produce stylistically
appropriate, linguistically correct output that is indistinguishable from what a good
human translator can produce.” (DePalma, 2013, Online)
Iconic, an MT company based in Dublin, categorizes light and full post-editing by
answering three questions: what, when and result (WEB, a). It suggests that light post-
editing is for internal dissemination while full post-editing is for wide dissemination or
certified documentation.
4. Comparative Studies of PE Guidelines
TAUS established PE guidelines in partnership with CNGL (Centre for Next Generation
Localization) in 2010 with the hope that organizations could use the guidelines as a
baseline and tailor them for their own purposes as required. This was the first attempt at publicly available, industry-focused PE guidelines. The guidelines start with some
recommendations on reducing the level of post-editing required. TAUS highlighted two
main criteria that determined the effort involved in post-editing: the quality of the MT
raw output and the expected end quality of the content. They then proposed the
guidelines according to the different levels of expected quality. Flanagan and
Christensen (2014) carried out a research project and tested the TAUS PE guidelines
(2010) among translation trainees. Based on the results, they developed their own set of
PE guidelines for use in class. They adopted the TAUS guidelines for light post-editing
and proposed their tailored guidelines for full post-editing according to the TAUS
baseline for translator training purposes. Recently in 2016, TAUS updated their PE
guidelines to include a greater amount of detail than the previous set. The updated
guidelines have been divided into five parts. In addition to an updated version of the
previous guidelines, which constitutes its second part, the other four parts are as follows:
evaluating post-editor performance, post-editing productivity, pricing machine translation PE guidelines, and about the MT guidelines. For the purposes of this paper,
we will only discuss the second part that elaborates on the PE guidelines of different
levels. This part is almost a copy of the previous guidelines, with one specific difference: the caption for the high-level post-editing now reads “human translation quality” (although the body of the text still uses “quality similar or equal to human translation”).
At the 2010 AMTA conference, O’Brien presented a tutorial on post-editing. She
first introduced the general PE guidelines of Wagner (1985), then the guidelines on light
and full post-editing respectively. Mesa-Lao (2013) restated O’Brien’s general PE guidelines in his study. He offered suggestions on how to decide whether an MT output should be recycled in post-editing or not, and also mentioned Microsoft’s rules for making these decisions (the “5-10 second evaluation” rule and the “high 5 and low 5” rule).
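The recycle-or-retranslate decision described above lends itself to a simple triage routine. The sketch below is purely illustrative: the scoring heuristic, threshold and function names are our own assumptions, not Microsoft’s actual rules.

```python
# Hypothetical triage for raw MT segments: glance at a segment, estimate how
# much of it needs editing, and decide whether to recycle (post-edit) it or
# retranslate it from scratch. Heuristic and threshold are illustrative only.

def estimate_edit_share(mt_segment: str, approved_terms: set[str]) -> float:
    """Crude proxy for editing effort: the share of tokens that look
    problematic (here, simply tokens absent from an approved word list)."""
    tokens = mt_segment.lower().split()
    if not tokens:
        return 1.0  # an empty segment must be retranslated
    bad = sum(1 for t in tokens if t.strip(".,;:!?") not in approved_terms)
    return bad / len(tokens)

def triage(mt_segment: str, approved_terms: set[str], threshold: float = 0.5) -> str:
    """Return 'post-edit' if the segment looks cheap to fix, else 'retranslate'."""
    share = estimate_edit_share(mt_segment, approved_terms)
    return "post-edit" if share <= threshold else "retranslate"

terms = {"the", "guidelines", "are", "published"}
print(triage("the guidelines are published", terms))  # mostly approved tokens
print(triage("gzk frub qwv xxs", terms))              # mostly unusable output
```

In a real workflow the estimate would come from the post-editor’s quick reading of the segment (or from an automatic quality-estimation score), not from a word list; the point is only that the decision reduces to comparing estimated editing effort against a threshold.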
Although LSPs possess their own tailored PE guidelines, very few have been released online. Lee Densmer, a senior manager at Moravia, published her PE guidelines on the Moravia blog. The guidelines may reflect her personal opinion, but they can be taken to represent Moravia’s attitudes to some extent. Like Allen (2003), Densmer (2014) listed the determinant factors of post-editing levels, and both believed that the client and the expected level of quality play important roles. Given their dates of publication, we could argue that the determinant factors listed by Densmer, such as translation memory (TM), are more closely tied to modern technology, while the factors listed by Allen are more traditional, including the time of translation and the life expectancy and perishability of the information. Densmer pointed out that the key phrases for light post-editing were “factual correctness” and “good enough”, which is in line with TAUS. She argued that light post-editing was not an easy job for linguists, since they had to do their best to turn a blind eye to those “minor” errors. With reference to full post-editing, she indicated that “the effort to achieve human level quality from MT output may exceed the effort to have it translated by a linguist in the first place” (Densmer, 2014), an assertion that Iconic (WEB, a) supports. Finally, she pointed to the “shades of grey”: many clients want the quality of full post-editing at the price and speed of light post-editing.
Inspired by the categories used in the LISA QA Model (Localization Industry Standards Association Quality Assurance Model) and the SAE (Society of Automotive Engineers) J2450 translation quality metric, we created Tables 1 and 2 below to compare the five proposals of PE guidelines. For each variable in the left column, we listed the corresponding requirements of the five proposals. The authors differ somewhat in the terminology they use, but terms such as “accurate” and “correct” appear to refer to roughly the same concept. If a set of guidelines did not mention a variable, the cell was left blank.
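The SAE J2450 metric mentioned above works by assigning each error a weight based on its category and severity, then normalising the weighted sum by the length of the text. A minimal sketch follows; the category names and weights are indicative of the J2450 scheme but should be treated as illustrative rather than normative.

```python
# Sketch of J2450-style scoring: classify each error by category and severity,
# sum the corresponding weights, and normalise by word count (lower is better).
# The weights below are illustrative; consult SAE J2450 for normative values.

WEIGHTS = {  # category -> (serious weight, minor weight)
    "wrong_term": (5, 2),
    "syntactic_error": (4, 2),
    "omission": (4, 2),
    "word_structure": (4, 2),
    "misspelling": (3, 1),
    "punctuation": (2, 1),
    "miscellaneous": (3, 1),
}

def j2450_score(errors: list[tuple[str, str]], word_count: int) -> float:
    """errors: (category, severity) pairs, severity in {'serious', 'minor'}."""
    total = 0
    for category, severity in errors:
        serious_w, minor_w = WEIGHTS[category]
        total += serious_w if severity == "serious" else minor_w
    return total / word_count

# Two serious terminology errors and one minor punctuation slip in 100 words:
errors = [("wrong_term", "serious"), ("wrong_term", "serious"),
          ("punctuation", "minor")]
print(j2450_score(errors, word_count=100))
```

A metric of this shape makes the difference between light and full post-editing operational: a light PE pass might only penalise errors that affect factual accuracy, whereas a full PE pass would count every category in the table.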
Table 1. Comparative study of light PE guidelines

| LIGHT POST-EDITING | TAUS (2016) / (Flanagan & Christensen, 2014) | O’Brien (2010) | Mesa-Lao (2013) | Densmer (2014) |
| Accuracy | TT communicates the same meaning as ST | Important | Important | Factually accurate |
| Terminology | | No need to research | No need to spend too much time researching if incorrect | Be consistent |
| Grammar | May not be perfect | Not a big concern | No need to correct unless the information has not been fully delivered | Correct only the most obvious errors |
| Semantics | | Correct | Correct | |
| Spelling | Apply basic rules | Apply basic rules | | |
| Syntax | Might be unusual | Can be ignored | Do not change | |
| Style | No need | No need | | |
| Restructure | No need if the sentence is correct | No need if can be understood | | Rewrite confusing sentences |
| Culture | Edit if necessary | Edit if necessary | | |
| Information | Fully delivered | | | |
| Others | Use as much raw MT output as possible | Textual standards are not important; very high throughput expectation; low quality expectations | No need to change a word if correct | Fix machine-induced mistakes; delete unnecessary or extra machine-generated translation alternatives |
From Table 1, it can be seen that all proposals value the accuracy of the message and the correctness of semantics in light post-editing, while grammar, syntax and style are not a big concern. O’Brien and Mesa-Lao believe that there is no need to spend too much time researching incorrect terminology, while Densmer contends that terminology should be consistent. TAUS, Flanagan and Christensen, and O’Brien hold that basic spelling rules should be applied and that the text should be adapted to the target culture. If a sentence is understandable or correct, most proposals state that it should not be restructured. O’Brien clearly points out that the quality expectation for light post-editing is low. Densmer emphasizes machine-induced errors and translation alternatives in her guidelines.
Table 2. Comparative study of full PE guidelines

| FULL POST-EDITING | TAUS (2016) | Flanagan & Christensen (2014) | Mesa-Lao (2013) | Densmer (2014) |
| Accuracy | TT communicates same meaning as ST | | Important | Absolutely accurate |
| Terminology | Key terminology is correct | Key terminology is correct | Apply the term as used in the term database for any incorrect terminology | Consistent and appropriate |
| Grammar | Correct | Correct | Correct | Correct |
| Semantics | Correct | Correct | Correct | Correct |
| Punctuation | Correct | Apply basic rules | | Correct |
| Spelling | Apply basic rules | Apply basic rules | | Correct |
| Syntax | Normal | Correct | Make modifications in accordance with practices for the TL | |
| Style | Fine | | Not important | Consistent, appropriate and fluent |
| Restructure | No need if the language is appropriate | | No need if the sentence is semantically correct | Rewrite confusing sentences |
| Culture | Edit if necessary | Edit if necessary | | Adapt all cultural references |
| Information | Fully delivered | Fully delivered | | |
| Formatting | Correct | Ensure the same ST tags are present and in the correct positions | | Correct (including tagging) |
| Others | Basic rules apply to hyphenation; human translation quality | Use as much raw MT output as possible; ensure the untranslated terms belong to the client’s list of ‘Do not translate’ terms | No need to change a word if it is correct; accept the repetitive MT output | Perfect faithfulness to the source text; fix machine-induced mistakes; delete unnecessary or extra machine-generated translation alternatives; cross-reference translations against other resources; human translation quality |
Regarding full post-editing, TAUS and Densmer expect the quality to be indistinguishable from human translation, and they emphasize the significance of fine style. However, O’Brien and Mesa-Lao see no need to pay much attention to style; they expect the quality after full post-editing to be medium rather than equal to translation from scratch. Should the quality after full post-editing be the same as human translation, or should it retain traces of machine translation? We can see from Table 2, especially the Others row, that the resource centre and the LSP are more inclined towards human translation quality than the scholars. If full post-editing should reach human translation quality, it remains an open question whether full post-editing is more pragmatic than translation from scratch in terms of cost. It is even debatable whether post-editing actually brings productivity gains, which leads to scepticism about its benefits. Guerberof (2009) and Federico et al. (2012) reported productivity gains in their research, while Gaspari et al. (2014) found that post-editing could lead to productivity losses compared with translation from scratch.
The requirements of the full PE guidelines surpass those of the light PE guidelines, in terms of accuracy, semantics and culture in particular. Unlike the light PE guidelines, most full PE guidelines require correctness of terminology, grammar, punctuation, syntax and formatting.
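The comparison summarised above can also be made machine-checkable by encoding the table cells as data and querying where the proposals overlap and where they diverge. The sketch below uses a simplified subset of Table 2; the encoding is our own illustration, not part of any published guideline set.

```python
# Sketch: encode a simplified subset of the full PE guidelines (cf. Table 2)
# and compute which variables every proposal addresses and where, among the
# proposals that address a variable, the stated requirements still diverge.

full_pe = {
    "TAUS (2016)": {"grammar": "correct", "semantics": "correct",
                    "style": "fine"},
    "Flanagan & Christensen (2014)": {"grammar": "correct",
                                      "semantics": "correct"},
    "Mesa-Lao (2013)": {"grammar": "correct", "semantics": "correct",
                        "style": "not important"},
    "Densmer (2014)": {"grammar": "correct", "semantics": "correct",
                       "style": "consistent, appropriate and fluent"},
}

# Variables mentioned by every proposal (the overlaps):
shared = set.intersection(*(set(cells) for cells in full_pe.values()))

def divergent(variable: str) -> bool:
    """True if the proposals that mention this variable disagree on it."""
    values = {cells[variable] for cells in full_pe.values() if variable in cells}
    return len(values) > 1

print(sorted(shared))                                          # shared variables
print([v for v in ("grammar", "semantics", "style") if divergent(v)])
```

On this toy encoding, grammar and semantics come out as shared requirements while style is the divergent one, which mirrors the pattern discussed above.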
5. Conclusions
From this comparative study, we can see that the existing PE guidelines have many
overlaps, especially for light post-editing. The main differences lie in the full PE
guidelines and concern the requirement for style and the expected quality of the target
text, which we believe depends on the use and type of the text.
As mentioned above, there are no standard PE guidelines. DePalma (2013) contends that clients should agree with LSPs on exactly what light and full post-editing are to include before contracting a job. Densmer (2014) likewise asserts that quality levels, throughputs and expectations must be defined in advance. We agree with them and advise LSPs and their clients to discuss and create their own tailored PE guidelines together beforehand.
In addition to the general PE guidelines above, there are other sources of PE guidelines that are either language-dependent or purpose-specific. Such guidelines include,
for example, the GALE PE guidelines (WEB, b), PE guidelines with a focus on Japanese
(Tatsumi, 2010), ACCEPT’s guidelines for monolingual and bilingual post-editing
(ACCEPT, 2011), language dependent (English-Spanish) PE guidelines (Rico and
Ariano, 2014), PE guidelines for BOLT Machine Translation Evaluation (WEB, c), and
PE guidelines for lay post-editors in an online community (Mitchell, 2015).
Acknowledgements
This work is supported by the Science Foundation Ireland (SFI) ADAPT project
(Grant No.: P31021). The authors would also like to thank Dr. Sharon O’Brien for her
helpful comments and suggestions.
References
ACCEPT. (2012). Seminar Material on Post-editing Edition 2, available at
http://cordis.europa.eu/docs/projects/cnect/9/288769/080/deliverables/001-
D622SeminarMaterialonPostEditingEdition2.pdf
Allen, J. (2003). Post-editing. Computers and Translation: A Translator’s Guide, 35, 297-317.
Belam, J. (2003). “Buying up to falling down”: a deductive approach to teaching post-editing, In:
Proceedings of MT Summit IX, Workshop on Teaching Translation Technologies and Tools
(27 Sept. 2003, New Orleans, USA), pp 1-10.
Densmer, L. (2014). Light and Full MT Post-Editing Explained, available at
http://info.moravia.com/blog/bid/353532/Light-and-Full-MT-Post-Editing-Explained
DePalma, D. (2013). Post-editing in practice, available at
http://www.tcworld.info/e-magazine/translation-and-localization/article/post-editing-in-
practice/
Federico, M., Cattelan, A., Trombetti, M. (2012). Measuring user productivity in machine
translation enhanced computer assisted translation, In: Proceedings of the Tenth Conference
of the Association for Machine Translation in the Americas, AMTA 2012 (28 Oct.–1 Nov.
2012, San Diego, USA).
Flanagan, M., Christensen, T.P. (2014). Testing post-editing guidelines: how translation trainees
interpret them and how to tailor them for translator training purposes. The Interpreter and
Translator Trainer, vol. 8, no. 2, pp. 257-275.
Gaspari, F. (2014). Perception vs Reality: Measuring Machine Translation Post-Editing
Productivity, In: Proceedings of the Third Workshop on Post-Editing Technology and
Practice at the 11th Conference of the Association for Machine Translation in the Americas,
AMTA 2014 (22-26 Oct. 2014, Vancouver, Canada), pp. 60-72.
Guerberof, A. (2009). Productivity and quality in MT post-editing, In: Proceedings of MT Summit
XII-Workshop: Beyond Translation Memories: New Tools for Translators MT, AMTA 2009
(26-30 Aug. 2009, Ottawa, Canada).
ISO, ISO 17100:2015: Translation services – Requirements for translation services, available at
http://www.iso.org/iso/catalogue_detail.htm?csnumber=59149
Mesa-Lao, B. (2013). Introduction to post-editing – The CasMaCat GUI, available at
http://bridge.cbs.dk/projects/seecat/material/hand-out_post-editing_bmesa-lao.pdf
Mitchell, L. (2015). The potential and limits of lay post-editing in an online community, In:
Proceedings of the 18th Annual Conference of the European Association for Machine
Translation, EAMT 2015 (11-13 May 2015, Antalya, Turkey).
O’Brien, S. (2010). Introduction to Post-Editing: Who, What, How and Where to Next? Available
at http://amta2010.amtaweb.org/AMTA/papers/6-01-ObrienPostEdit.pdf
Rico-Pérez, C., Ariano-Gahn, M. (2014). Defining language dependent post-editing rules: the case
of the language pair English-Spanish. In O'Brien, S., Balling, L.M., Carl, M., Simard, M.,
Specia, L., (eds.), Post-editing of machine translation: processes and applications.
Newcastle, pp. 299-322.
Tatsumi, M. (2010). Post-editing machine translated text in a commercial setting: Observation
and statistical analysis. PhD thesis, Dublin City University, Dublin, Ireland.
TAUS. (2010). MT Post-editing Guidelines, available at
https://www.taus.net/academy/best-practices/postedit-best-practices/machine-translation-
post-editing-guidelines
TAUS. (2016). TAUS Post-Editing Guidelines, available at
https://www.taus.net/think-tank/articles/postedit-articles/taus-post-editing-guidelines
Wagner, E. (1985). Post-editing Systran - A challenge for Commission Translators, Terminologie
et Traduction, no. 3, pp. 1-7.
WEB (a). Post-Edited Machine Translation. http://iconictranslation.com/solutions/post_edited_mt/
WEB (b). Post Editing Guidelines for GALE Machine Translation Evaluation.
http://projects.ldc.upenn.edu/gale/Translation/Editors/GALEpostedit_guidelines-3.0.2.pdf
WEB (c). Post Editing Guidelines for BOLT Machine Translation Evaluation.
http://www.nist.gov/itl/iad/mig/upload/BOLT_P3_PostEditingGuidelinesV1_3_3.pdf
Received May 2, 2016, accepted May 8, 2016
... As far as guidelines are concerned, although these may differ significantly depending on the scenario, certain patterns can be found. Hu and Cadwell (2016) compared five sets of PE guidelines and identified overlaps and discrepancies. In the case of full PE, most guidelines were found to share the same vision regarding accuracy, terminology and grammar, but express different views on style. ...
... In the case of full PE, most guidelines were found to share the same vision regarding accuracy, terminology and grammar, but express different views on style. Most guidelines also encourage post-editors to use as much of the MT output as possible, thus avoiding preferential changes (Hu and Cadwell, 2016;Nitzke and Hansen-Schirra, 2021). Moreover, the TAUS guidelines (TAUS, 2016) and ISO 1858 standard for PE services (ISO, 2017) are commonly recognised as references in the industry. ...
Conference Paper
The improvement in quality and precision of machine translation (MT) outputs has captured the attention of both the academic research community and industry professionals. This has led to a focus on optimising post-editing (PE) tasks to enhance the training of translation engines and improve the workflow of post-editors. However, translation environments, PE tasks, and quality assessment processes are still dependent on computer-assisted translation (CAT) tools. The neural MT paradigm requires a workstation specifically designed for PE tasks, as the requirements for PE differ from those of translation, particularly in areas such as PE process analysis, PE guidelines, and PE skill sets. The aim of this paper is to establish a path for the development of a workstation (WS) specifically customised for the needs of PE tasks. To achieve this objective, an analysis of the main Translator’s WS initiatives has been conducted, via literature review, to identify the functions that were initially developed to enhance the translator’s workflow and that, subsequently, were employed for the development of present-day CAT tools. Then, we will identify the common requirements in PE tasks, which will serve as the basis for presenting a prototype tool tailored to meet the needs of PE.
... An obvious technology at stake is NMT, whose importance is particularly high due to the increasing prevalence of the MTPE production model. However, the practice of MTPE is changing rapidly and being adopted differently in different sectors of the industry, which can be observed, for example, in the tiered approach to quality (Hu and Cadwell 2016;Nunziatini and Marg 2020). The terms such as 'MT' and 'post-editing' may have different meaning in, say, five years' time, which will affect the value of the scale if it is to be used for longitudinal studies. ...
Article
Full-text available
This article discusses the conceptual and methodological aspects of the Translator WRQoL (Work-related Quality of Life) survey and provides some preliminary results and observations based on the first pilot study. The survey is being developed to measure translators’ work satisfaction and motivation in the context of job digitalisation and automation. Literature suggests that translators’ work satisfaction and their career motivation have been adversely affected. The survey being developed in this study intends to quantitatively measure the causes of the adverse effects using psychometric-strong scales. The ultimate goal is to administer the Translator WRQoL survey on a large scale, and using SEM (Structural Equation Modelling), to identify the causal relationships between the constructs measured by the scale and to determine what kind of translators (regarding worker profiles and attitudes to technology and other factors) have high/low levels of work-related quality of life and are more/less willing to stay in the profession.
... The primary objective of fast MTPE is to enable audiences to grasp the essence of the translation, while conventional MTPE aims to meet the standard of publication. Shofner (2021) argued that light MTPE should be sufficient for information Hu and Cadwell (2016), MTPE prioritizes factual accuracy, terminological consistency, correct grammar, correct semantics, rewriting of confusing sentences, and correction of other MT errors, such as machine-generated unnecessary or extra words. Full MTPE, on the other hand, emphasizes accurate messages, terminological consistency, appropriate terminology, correct grammar, semantics, punctuation, spelling, modification of incorrect syntactic structure, correct formatting and correction of other MT errors such as stylistic awkwardness (as cited in Shih, 2021). ...
Article
The present research aims to conduct a process-oriented analysis to measure whether a group of graduate students enrolled in a translation course made steady progress in their performance of identifying machine translation (MT) errors and post-editing MT drafts of company web texts and news texts. A mixed methods approach consisting of quantitative and qualitative analyses was used. The findings show that there was a steady decline in the average number of MT errors that students could not spot or correctly identify in their three assignments. However, there was no significant improvement in student MTPE performance, with only a slight decrease in errors in the final MTPE assignment, which still remained worse than the first one. Finally, student responses in their reflection essays indicated that their reception of MT and MTPE had shifted from negative denial to positive acceptance. Overall, the findings of the present study reveal the need to extend the period of MTPE training for students. Incorporating MT training into the translation course has proven to be worthwhile for students, as it helps to dispel students’ previous misconception about MT and MTPE.
... Nevertheless, the nature of much translation work is fundamentally changing, and postediting machine-generated content, for instance, has already become a widespread practice. Post-editing guidelines often differentiate between full and light edits, with the latter only aiming for comprehensible output, which does not have to be perfect in terms of style or grammar (Hu and Cadwell 2016). The distinction signals the continued hold of the conduit metaphor on perceptions about what translation activity entails: MT output is seen as containing the essence of a linguistic message, which the human translator is then invited to repackage to a variable degree. ...
Article
Full-text available
This article presents the results of a corpus-assisted study focused on the expression lost in translation in a corpus of English-language online newspapers (NOW), and in two scholarly bibliographic databases (BITRA and SCOPUS). On the surface, the phrase may seem to indicate negative perceptions of translation practice. However, a study of several hundred occurrences of the cliché paints a more complex picture involving a variety of communicative practices and settings. Many occurrences of the phrase address, for instance, broader issues of cultural and interpersonal misunderstanding. In such cases, the perceived failure to establish a meaningful connection can often be ascribed to the absence of attempts at mediation or transmission, thus signalling recognition that the greatest losses occur not because of, but by lack of translation. In addition, the data indicate that lost in translation's varied usage patterns can be understood in terms of two competing metaphorical frames, namely one of transportation and one of orientation: in translation , one can lose something, but one can just as well get lost. The implications of both metaphorical mappings are further addressed with reference to the issue of visibility, and to discussions about the proper scope of translation studies research. Abstract features of human experience are commonly understood with reference to more concrete, physical objects and relationships. We tend to think of time, for instance, as a valuable resource, something that you can lose, or waste, or run out of.
... PROMT, Trados, Phrase, DeepL, Tilde, etc.) and best practices in studying (e.g. European Association for Machine Translation (Hu, Cadwell 2016;Wang et al. 2022 , research (e.g. Tra&Co Centre for Translation and Cognition at Johannes Gutenberg University of Mainz) and dissemination (e.g. ...
Chapter
Full-text available
Artificial intelligence technologies are currently challenging both machine translation and natural language processing. A better understanding of the social, economic and ethical stakes is urgently needed. This collective work explores the possibility and the contours of a new consensus between the human uses of language and the contributions of the machine.
... 196). Consequently, post-translation tasks begin after light post-editing [43] has been applied to the raw MT output, i.e., the correction of grammar and spelling mistakes and of terminological errors, in order to obtain an intelligible translated text that accurately represents the content of the source text. ...
Article
Full-text available
Teaching translation in higher education has undeniably been impacted by the innovations brought about by machine translation (MT), more particularly neural machine translation (NMT). This influence has become significantly more noticeable in recent years, as NMT technology progresses hand in hand with artificial intelligence. A case study supported by a questionnaire conducted among translation students (bachelor’s and master’s programmes at ISCAP) probed the degree of student satisfaction with CAT tools and revealed that they favour the use of MT in their translation practices, focusing their work on post-editing tasks rather than exploring other translation strategies and complementary resources. Although MT cannot be disregarded in translation programmes, as machine-generated translations make up an increasingly larger amount of a professional translator’s output, the widespread use of MT by students poses new challenges to translators’ training, since it becomes more difficult to assess students’ level of proficiency. Translation teachers must not only adapt their classroom strategies to accommodate these current translation strategies (NMT) but also, as intended by this study, find new, adequate methods of training and assessing students that go beyond regular translation assignments while still ensuring that students acquire the proper translation competence. Thus, as the use of NMT makes it considerably more challenging to assess a student’s level of translation competence, it is necessary to introduce other activities that not only allow students to acquire and develop their translation competence as defined in the EMT (European Masters in Translation) framework but also enable teachers to assess students more objectively. 
Hence, this article foregrounds a set of activities usually regarded as “indirect tasks” for technical translation courses that hopefully results in the development of student translation skills and competence, as well as provides more insights for teachers on how to more objectively assess students. It is possible, then, to conclude that these activities, such as different types of paraphrasing and error-detection tasks, may have the potential to encourage creative thinking and problem-solving strategies, giving teachers more resources to assess students’ level of translation competence.
Article
Machine translation post-editing quality evaluation has received relatively little attention in translation pedagogy to date. It is a time-consuming process that involves the comparison of three texts (source text, machine translation and student post-edited text) and the systematic identification and correction of students’ edits (or absence thereof) of machine translation (MT) output. There are as yet no widely available, standardized, user-friendly annotation systems for use in translator education. In this article, we address this gap by describing the Machine Translation Post-Editing Annotation System (MTPEAS). MTPEAS includes a taxonomy of seven categories that are presented in easy-to-understand terms: Value-adding edits, Successful edits, Unnecessary edits, Incomplete edits, Error-introducing edits, Unsuccessful edits, and Missing edits. We then assess the robustness of the MTPEAS taxonomy in a pilot study of 30 students’ post-edited texts and offer some preliminary findings on students’ MT error identification and correction skills.
Article
The purpose of this research is to establish the challenges that professional translators face when translating with CAT tools and MT, particularly for the English-Arabic language pair. The study uses a comparative descriptive design to compare different translations of source texts from English into Arabic produced with the SDL Trados CAT tool. The first research question concerns the most frequent types of errors in the texts translated by the participants. The study seeks to establish some of the linguistic and cultural difficulties that translators encounter, such as grammar, syntax, spelling, punctuation, style, formatting, accuracy, terminology and semantics. It further aims to raise awareness of these issues and of how to address them in post-editing, which has been integrated into translator education and training to enhance productivity and the quality of the end product. The study also offers suggestions to researchers and technology developers for the improvement of MT systems, especially for Arabic to English translation. The results of the analysis show that the most severe problems were observed in the syntactic and grammatical aspects of the text when translating from English into Arabic. The findings can help developers enhance translation tools for efficiency, especially for Arabic to English translation.
Article
Drawing on the analysis of several real working examples, this article compares the lexical precision offered by bilingual dictionaries with that of machine translation engines based on neural networks when non-professional users turn to them to translate texts. We analyse the behaviour of three of these machine translation engines (Google Traductor, Bing Microsoft Translator and the DeepL translator) and evaluate a corpus composed of real working documents from the field of translation. We focus on those lexical errors, arising from the use of machine translation engines, that jeopardise comprehension of the final text. Finally, we compare these results with those we would have obtained if, for the same tasks, we had turned to bilingual dictionaries instead of machine translation engines. The study shows that, for the translation of more or less complex texts, the output generated by the MT engines proves insufficient, and that the lexicographical data collected in various online bilingual dictionaries generally provide much more complete and adequate information to meet the user's communicative needs.
Chapter
Machine translation has recently improved dramatically in accuracy, convenience, and accessibility, and while it has been widely adopted, it remains far from perfect. This chapter considers the perils and potential benefits of machine translation in English-medium-of-instruction transnational higher education. The perils of machine translation in this context are that it can stunt language learning and cause miscomprehension; it problematizes authorship; it facilitates novel forms of plagiarism; and it can hurt transnational higher education institutions' reputations and devalue their degrees. The potential benefits of machine translation are that it can aid reading comprehension, raise writing level, and help student retention; it provides an opportunity for critically engaging with digital technology and its appropriate use; and it facilitates instruction and research beyond instructor and student language competencies, which can broaden and transnationalize the often Americentric and Eurocentric content of transnational higher education.
Article
There is a growing interest in machine translation (MT) and post-editing (PE). MT has been around for decades, but the use of the technology has grown significantly in the language industry in recent years, while PE is still a relatively new task. Consequently, there are currently no standard PE guidelines to use in translator training programmes. Recently, the first set of publicly available industry-focused PE guidelines (for 'good enough' and 'publishable' quality) was developed by the Translation Automation User Society (TAUS) in partnership with the Centre for Global Intelligent Content (CNGL), which can be used as a basis on which to instruct post-editors in professional environments. This paper reports on a qualitative study that investigates how trainee translators on an MA course, which is aimed at preparing the trainees for the translation industry, interpret these PE guidelines for publishable quality. The findings suggest trainees have difficulties interpreting the guidelines, primarily due to trainee competency gaps, but also due to the wording of the guidelines. Based on our findings we propose training measures to address these competency gaps. Furthermore, we provide post-editing guidelines that we plan to use for our own post-editing training.
Article
Machine translation systems, when they are used in a commercial context for publishing purposes, are usually used in combination with human post-editing. Thus understanding human post-editing behaviour is crucial in order to maximise the benefit of machine translation systems. Though a number of studies on human post-editing have been carried out to date, there is a lack of large-scale studies on post-editing in industrial contexts which focus on the activity in real-life settings. This study observes professional Japanese post-editors' work and examines the effect of the amount of editing made during post-editing, source text characteristics, and post-editing behaviour on the amount of post-editing effort. A mixed-method approach was employed to analyse the data both quantitatively and qualitatively and gain detailed insights into the post-editing activity from various viewpoints. The results indicate that a number of factors, such as sentence structure, document component types, use of product-specific terms, and post-editing patterns and behaviour, affect the amount of post-editing effort in an intertwined manner. The findings will contribute to a better utilisation of machine translation systems in the industry as well as the development of the skills and strategies of post-editors.
Mesa-Lao, B. (2013). Introduction to post-editing – The CasMaCat GUI, available at http://bridge.cbs.dk/projects/seecat/material/hand-out_post-editing_bmesa-lao.pdf
Mitchell, L. (2015). The potential and limits of lay post-editing in an online community. In: Proceedings of the 18th Annual Conference of the European Association for Machine Translation, EAMT 2015 (11-13 May 2015, Antalya, Turkey).
Rico-Pérez, C., Ariano-Gahn, M. (2014). Defining language dependent post-editing rules: the case of the language pair English-Spanish. In: O'Brien, S., Balling, L.M., Carl, M., Simard, M., Specia, L. (eds.), Post-editing of Machine Translation: Processes and Applications. Newcastle, pp. 299-322.
Densmer, L. (2014). Light and Full MT Post-Editing Explained, available at http://info.moravia.com/blog/bid/353532/Light-and-Full-MT-Post-Editing-Explained
DePalma, D. (2013). Post-editing in practice, available at http://www.tcworld.info/e-magazine/translation-and-localization/article/post-editing-in-practice/
Allen, J. (2003). Post-editing. In: Computers and Translation: A Translator's Guide. Benjamins Translation Library 35, pp. 297-317.
Belam, J. (2003). "Buying up to falling down": a deductive approach to teaching post-editing. In: Proceedings of MT Summit IX, Workshop on Teaching Translation Technologies and Tools (27 Sept. 2003, New Orleans, USA), pp. 1-10.
Guerberof, A. (2009). Productivity and quality in MT post-editing. In: Proceedings of MT Summit XII – Workshop: Beyond Translation Memories: New Tools for Translators, AMTA 2009 (26-30 Aug. 2009, Ottawa, Canada).
Gaspari, F. (2014). Perception vs Reality: Measuring Machine Translation Post-Editing Productivity. In: Proceedings of the Third Workshop on Post-Editing Technology and Practice at the 11th Conference of the Association for Machine Translation in the Americas, AMTA 2014 (22-26 Oct. 2014, Vancouver, Canada), pp. 60-72.