Thesis

Dorer, Brita. 2020. Advance translation as a means of improving source questionnaire translatability?: Findings from a think-aloud study for French and German. Berlin: Frank & Timme.

... The two methods that have been implemented for detecting and minimizing issues related to translatability and (inter)cultural portability of source questionnaires are Translatability Assessment (see, e.g., Acquadro et al. 2018 or Stathopoulou et al. 2019) and Advance Translation (Dorer 2020). Both methods use the activity of translating while the source questionnaire is developed. ...
... For example, to measure religiosity, a different question might be needed in a Christian country compared to a Muslim one (Fitzgerald et al. 2011: 570). Advance translation has been applied during source questionnaire development by several cross-cultural surveys, the most prominent one being the ESS, where advance translations have been implemented systematically since round 5 (i.e., 2009–2010) (Dorer 2015, 2020). ...
... This example shows how important advance translation is for the quality of the final survey data. For a detailed explanation of advance translation carried out in the ESS, see Dorer (2020). ...
Article
Advance translation is a method of source questionnaire development for multilingual survey projects to enhance translatability and (inter)cultural portability. The aim is to minimize translation issues in the final translation stage. I empirically tested the results of a previously conducted advance translation in a think-aloud study and analyzed the utterances in a mixed-method approach, calculating chi-square statistics and cross-checking these against observational notes from the think-aloud sessions. My study confirms the usefulness of advance translation in making source items easier to translate, thus improving final translation quality. It appears to be particularly useful for comprehensibility issues of the source text, irrespective of the target language. I recommend that advance translations be carried out into all languages and cultures into which the final source questionnaire is to be translated. This will improve source questionnaire translatability and, thus, final translation quality and overall cross-cultural data quality.
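The abstract above mentions chi-square statistics calculated on coded think-aloud utterances. As a rough, hypothetical illustration only (not the study's actual code or data), a comparison of this kind could be run in Python as follows; the counts and the two-by-two layout are assumptions.

```python
# Hypothetical sketch: chi-square test of independence on coded think-aloud
# utterances, comparing items revised after advance translation with unrevised
# items. All counts below are invented placeholders, not the study's data.
from scipy.stats import chi2_contingency

observed = [
    [34, 126],  # advance-translated items: problem vs. no-problem utterances (assumed)
    [61, 99],   # unrevised items: problem vs. no-problem utterances (assumed)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
```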
... This may be achieved by drawing on written documentation on particular decisions (cf. Behr & Zabal 2020) or by accessing recordings of think-aloud translations or team discussions (e.g., Behr 2009, Dorer 2020). Thus, we could learn more about successful translation strategies that lead to the desired outcome in the data. ...
Article
Full-text available
It is generally taken for granted that comparability in comparative research hinges, among other things, on the quality of questionnaire translations. However, what do slight differences in translation mean for respondents’ answers? In this article, we look at a combination of quantitative evidence from split-ballot experiments and qualitative evidence from additional probing questions for three items that were translated according to different translation methods, resulting in different translations, e.g., for “our national way of life.” Two of the three items do not show any quantitative differences between translation versions when implemented in split-ballot experiments. However, using open-ended probing questions we delved deeper into the effects of the different translation versions. This allowed us to show that different translations do indeed change respondent understanding. We suggest mechanisms that may lead to different translations (not) having an impact on the data, and we also try to align the results with the notion of equivalence/comparability in translation. Ultimately, we showcase the usefulness of web probing for exploring how different translations are understood.
... In the following, we will be using the term translation, given its usage in cross-cultural survey methodology (Harkness et al., 2010b; Lyberg et al., 2021) and in large-scale studies such as the ESS, the ISSP, EVS, etc. In these studies, the source instruments are typically designed with cross-cultural implementation in mind so that cross-cultural relevance and translatability are considered early on (Dorer, 2020; Smith, 2004), paving the way for a more or less smooth translation. The term translation, however, should not lead to taking translation lightly or to misunderstanding it as a mere automatic word replacement exercise (Lyberg et al., 2021). ...
Article
This review summarizes the current state of the art of statistical and (survey) methodological research on measurement (non)invariance, which is considered a core challenge for the comparative social sciences. After outlining the historical roots, conceptual details, and standard procedures for measurement invariance testing, the paper focuses in particular on the statistical developments that have been achieved in the last 10 years. These include Bayesian approximate measurement invariance, the alignment method, measurement invariance testing within the multilevel modeling framework, mixture multigroup factor analysis, the measurement invariance explorer, and the response shift-true change decomposition approach. Furthermore, the contribution of survey methodological research to the construction of invariant measurement instruments is explicitly addressed and highlighted, including the issues of design decisions, pretesting, scale adoption, and translation. The paper ends with an outlook on future research perspectives.
... For each item, half of the translators translated version a and the other half translated version b. This was due to a methodological setting in the larger project in which this study was embedded (Dorer 2020, 2022). I triangulated think-aloud data with keylogging data gathered with Translog 2006 (Alves & Gonçalves 2003). ...
Article
English ‘gradation’ expressions—the interrogative adverb how followed by an adjective or an adverb—are frequent in questionnaires and they are not straightforward to translate into French. A good translation approach is needed, and there are mainly two options: The recommended approach involves the French adverbs à quel point or dans quelle mesure. The second approach uses a direct question, and the gradation occurs by choosing one of several options in the pre-established answer scale. Could there be a link between questionnaire translators’ expertise and the way they handle these translations, both the product and the process? This exploratory project studied six professional questionnaire translators, whose performance was recorded with concurrent think-aloud and keylogging techniques, and then triangulated. Correlations were analysed qualitatively. The results revealed a link between the questionnaire translators’ expertise and their translations. This confirms the importance of relying on expert translators and further briefing and training them to become good questionnaire translators.
... Questions can also be annotated for translation or specifically earmarked for adaptation (Behr & Scholz, 2011). Furthermore, pretesting and translatability assessments – or, alternatively, advance translations – help to assess the questionnaire's suitability for a multilingual and multicultural study before the source questionnaire is finalized (Acquadro et al., 2018; Dept et al., 2017; Dorer, 2020; Smith, 2004). The translatability criteria summarized in Acquadro et al. (2018) or the advance translation scheme by Dorer (2011; later updated in Dorer, 2020) highlight what can be considered when reviewing a source questionnaire for translatability (e.g., issues pertaining to culture, language or item construction). ...
Book
Full-text available
This open access book explores implications of the digital revolution for migration scholars’ methodological toolkit. New information and communication technologies hold considerable potential to improve the quality of migration research by originating previously non-viable solutions to a myriad of methodological challenges in this field of study. Combining cutting-edge migration scholarship and methodological expertise, the book addresses a range of crucial issues related to both researcher-designed data collections and the secondary use of “big data”, highlighting opportunities as well as challenges and limitations. A valuable source for students and scholars engaged in migration research, the book will also be of keen interest to policymakers.
Chapter
Full-text available
With record numbers of refugees and internally displaced persons in the world, it is more important than ever that policy makers, aid organizations, and advocacy groups have access to high-quality data about these vulnerable populations. However, refugee and internally-displaced persons settlements pose unique challenges to the selection of probability samples. These settlements can grow quickly, and registers often are not available or not up-to-date. Refugees who live in communities also are difficult to reach with a probability sample because they are hard to identify, contact, and interview. Drawing on recent data collection experiences, this chapter describes the sample designs that can address such challenges. We argue that the best sampling techniques are those that minimize interviewer discretion and contain built-in opportunities for verifying interviewer performance.
Chapter
Full-text available
This chapter focuses on specific challenges to surveying newly arrived immigrants, with a focus on refugees. In addition to the need to provide interviews for immigrants in their native language, it must be taken into account that a considerable proportion of this group has poor or no reading skills in their native language. Two strategies can be used to avoid systematically excluding this population: offering interviews with native-speaking interviewers or using computer-assisted self-interviewing (CASI) with additional audio files that enable respondents to listen to a questionnaire. We discuss the pros and cons of both strategies. Subsequently, using the data from the first wave of the German refugee study ReGES, in which both strategies were offered as a combined approach, we consider their effectiveness and practicability in more detail. Although native-speaking interviewers can increase cooperation and help avoid excluding illiterate individuals, they can also increase social desirability bias. However, illiterate interviewees are more likely to take advantage of the interviewer’s support in reading the questions aloud than to use the audio files. Nevertheless, we also found that a small but substantial subgroup of interviewees with little or no reading skills used the audio files often.
Chapter
Full-text available
This chapter examines the technical challenges involved in translating and adapting measurement instruments, i.e., questionnaires, for migration research. The first part outlines good practices in questionnaire translation. In line with the technology-based focus of this book, the second part focuses on computerized surveys and on the interplay between technology, language, and culture. Frameworks from the software localization field are consulted and transferred to the context of computerized multilingual surveys with respect to their impact on source questionnaire design and on translation and adaptation. Real-life examples come from our own experiences in international and migration research, as well as from a review of existing reports and research articles. The main goal of this chapter is to raise awareness of the additional technology layer that impacts translation and adaptation, with the ultimate aim of improving translation and adaptation processes and the outcomes of migration research.
Chapter
Full-text available
Internet surveys are the future of migration studies given that migrants engage more and more often in multidirectional movements and reside in multiple destination countries. The richness of the growing variety of geographical and temporal migrant trajectories poses particular challenges for quantitative researchers studying such spatially dispersed populations, for which sampling frames are not available. The Web-based Respondent Driven Sampling (RDS) method addresses many of the challenges occurring in such a context. However, its implementation is not an easy task and does not succeed in all migratory settings. The goal of this chapter is to outline the opportunities and challenges associated with using Web-based RDS for researching migrant populations. While the RDS method can be powerful in face-to-face interviews, its usefulness in Internet surveys is debatable. We examine this issue by using the example of a survey of Polish multiple migrants worldwide conducted in 2018–2019. We outline observations from the fieldwork (selection of seeds, formation of referral chains, etc.), and discuss the challenges of using Web-based RDS by focusing on the barriers to referral chain formation related to RDS assumptions and study design. The observed constraints relate to the definition of a target group, the management of incentives online, and the anonymity issues of online surveys.
Chapter
Full-text available
Choosing a methodology for migrant surveys usually is a complicated issue for a number of reasons, including the lack of information about sampling frames, and migrants’ status as a hard-to-reach population. The spread of social media usage among migrants has led researchers to look at the potential that Social Networking Sites (SNS) have for migration studies with respect to extracting and analyzing big data, conducting ethnography online, and reaching migrant respondents through SNS advertising. While the advantages of sampling migrants using SNS and surveying them online are clear, the drawbacks of this method—and, even more so, the potential solutions—constitute an almost unexplored field. In this chapter, we address one of the most significant challenges of using this strategy by exploring the biases it may present and the possible ways to resolve them. We use data from five SNS-based migrant surveys conducted during 2016–2018 with various groups of migrants and their adult children (second generation migrants) from Central Asian and Transcaucasian countries in Russia (with N varying from 302 to 12,524). After describing the procedure of surveying migrants with targeting on SNS, we outline the major biases, delineate possible solutions, and demonstrate how some of them—namely weighting based on dropout analysis and external validation—can work regarding the material from one of the surveys. We conclude that, at present, the range of biases remains more considerable than our opportunities to adjust for them, and so it may be time to concede this, and instead direct research efforts to exploring other approaches to data analysis and presentation that are more suitable for contexts of uncertainty—for example, fuzzy set theory and Bayesian statistics. This chapter contributes to the advancement of the emerging field of “tech-savvy” migration studies while signposting its bottlenecks and gains, as well as laying out directions for future research.
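The abstract above mentions weighting based on dropout analysis and external validation as one way to adjust SNS-based samples. The following Python sketch is a generic, hypothetical illustration of simple post-stratification weighting against an external benchmark; the cells, shares, and variable names are invented and are not taken from the surveys described.

```python
# Hypothetical post-stratification sketch: reweight an SNS-recruited sample so that
# its composition over a few cells (here sex x age group) matches assumed external
# benchmark shares (e.g., from a census or register). All values are invented.
import pandas as pd

sample = pd.DataFrame({
    "respondent_id": range(1, 9),
    "cell": ["f_18_29", "f_18_29", "f_30_49", "m_18_29",
             "m_18_29", "m_30_49", "m_30_49", "m_30_49"],
})

benchmark_share = {  # assumed population shares per cell; they sum to 1
    "f_18_29": 0.20, "f_30_49": 0.30, "m_18_29": 0.25, "m_30_49": 0.25,
}

sample_share = sample["cell"].value_counts(normalize=True)

# Weight for each respondent = population share / sample share of their cell
sample["weight"] = sample["cell"].map(lambda c: benchmark_share[c] / sample_share[c])
print(sample)
```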
Chapter
Full-text available
In an era in which unprecedented data on migration are collected, accessing and using these data, as well as understanding the kinds of questions that can be engaged with them, often are under-examined. In this chapter, using the Canadian case, we assess how census and survey data collected by national statistics agencies, administrative data, and other data sources can be used in an unprecedented era of migration and data gathering. The chapter explores issues of data access, consistency of units of analysis and concepts, technical skill deficits, and what is missed in existing data sources. Finally, we assess the need for creating data spines and common protocols. Overall, we offer insights from our navigation of the Canadian data ecosystem and a practical assessment of what can be done with different types of data for researching migration and immigrant settlement.
Chapter
Full-text available
In this chapter, we estimate human mobility between countries worldwide on the basis of global statistics on tourism and air passenger traffic. Adjusting and merging the data from these two sources through a simple set of procedures enabled us to counter some of their individual limitations. The resulting open-access dataset, which covers more than 15 billion estimated trips during the years 2011 to 2016, promises to be a comprehensive new resource on transnational human mobility worldwide. In this chapter, we illustrate the data characteristics and transformations adopted in creating this dataset. We explore potential applications and discuss the remaining caveats. We conclude with several lessons from our endeavor that might be useful for researchers who wish to engage in similar data-merging procedures.
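The chapter describes adjusting and merging tourism statistics and air passenger traffic into a single mobility dataset. Purely as a hedged sketch (the file names, column names, and the simple combination rule below are assumptions, not the authors' actual procedure), a merge on origin-destination-year keys might look like this in Python:

```python
# Hypothetical merge of two country-pair flow tables into one mobility estimate.
# Column and file names are illustrative; taking the larger of the two available
# figures is a stand-in for the chapter's adjustment procedures.
import pandas as pd

tourism = pd.read_csv("tourism_flows.csv")    # origin, destination, year, trips_tourism
air = pd.read_csv("air_passenger_flows.csv")  # origin, destination, year, trips_air

merged = tourism.merge(air, on=["origin", "destination", "year"], how="outer")

# Where both sources report a figure, keep the larger; otherwise use whichever exists.
merged["trips_estimate"] = merged[["trips_tourism", "trips_air"]].max(axis=1)

merged.to_csv("mobility_estimates.csv", index=False)
```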
... Finally, we wish to stress that, in general, increased levels of translation problems can be prevented or at least mitigated if the source instruments are developed in a manner that ensures cultural relevance, comparability, and translatability (Smith 2004; Behr and Scholz 2011; Dorer 2020). ...
Article
When it comes to quality in questionnaire translation, and hence comparability in comparative research, the chosen translation method is crucial for the outcome. Few empirical studies compare different translation methods, a fact that is often deplored in the research community. To fill the gap, in this research the team approach is compared against a simple back translation approach. In both cases, the starting point was the initial English-German translations of ISSP (International Social Survey Program) questions. For the textual assessment, the final translations from both approaches were assessed, with a focus on how translation issues, such as mistranslations or wording issues, identified in the initial translations were dealt with. While none of the 29 issues in the initial translation were still present in the final team translation version, 22 of these issues remained in the final version after the back translation approach. For the quantitative test, for a selected number of items, we ran a split-ballot experiment in a web survey. For only 5 out of 15 items (33%) that went into the experiment, we found significant differences between the translations, and only one could clearly be attributed to remaining errors in the back translation version. In sum, the final translation from the team approach clearly outperformed the final translation from the back translation approach when it comes to text-based criteria (in particular accuracy and fluency). The quantitative test showed that many translation issues (those remaining in the translation after the back translation step) had no effect on the statistics. Nevertheless, we ask respondents to put effort into survey responding; in the same vein, we as researchers should put effort into the survey experience by providing questions that are clearly worded and free of errors, which puts the team approach ahead of the back translation approach.
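As a minimal, hypothetical illustration of the kind of per-item split-ballot check reported here (the test choice, counts, and sample sizes below are assumptions, not the study's actual analysis), one could compare the share of agreeing respondents between two translation versions like this:

```python
# Hypothetical split-ballot check for one item: do the two translation versions
# yield a different share of (strongly) agreeing respondents? Counts are invented.
from statsmodels.stats.proportion import proportions_ztest

agree_counts = [312, 287]  # respondents choosing "agree"/"strongly agree" per version (assumed)
sample_sizes = [500, 500]  # respondents randomly assigned to version A vs. version B (assumed)

z_stat, p_value = proportions_ztest(count=agree_counts, nobs=sample_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```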
... In CTIS in particular, research methods are improving and they are now in focus, beyond the mere renewal of data-collection tools (e.g., Vieira, 2017; Gumul, 2020; Han & An, 2020; Mellinger & Hanson, 2020). After more than 50 years of striving to objectivize purported equivalent effects, the expanding empirical work on the reception of M2 communication products is most welcome (O'Hagan & Flanagan, 2018; Walker, 2019; Griebel, 2020; Liao et al., 2020; Nurminen, 2020; Rojo et al., 2021), even though some areas, such as translating tests and questionnaires, are still mostly in the hands of bilingual researchers of given specialties (but see Behr & Sha, 2018 and Dorer, 2020). There is a renewed interest in both professional concerns (Roziner & Shlesinger, 2010; Bundgaard et al., 2016; Teixeira & O'Brien, 2017; Plevoets & Defrancq, 2018; Risku et al., 2019) and evidence-based training (Kappus & Ehrensberger-Dow, 2020; Dawson & Romero-Fresco, 2021). ...
Chapter
Full-text available
In this chapter, we contend that cognitive translation and interpreting studies (CTIS) is an applied science because it employs the scientific method to study a socially defined object. We further argue that applied sciences share some traits, not only in their ways and goals, but also in their structure and the ways they evolve. We will thus compare CTIS in some respects to another applied science that we are all acquainted with, namely, medicine. We will draw parallels between these two fields with respect to their quest for legitimacy as both (applied) sciences and disciplines. We will also reflect on the epistemic nature of their respective bodies of knowledge, and we will discuss their takes on epistemic issues resulting from borrowing from other disciplines, dealing with fuzzy concepts, and addressing opposed theories and constructs that aim to explain the very same phenomena. We close by endorsing the need for epistemic pluralism in CTIS that, in turn, we warn, excludes epistemic approaches based on relativism.
... We developed an SBE instrument for abortion in English and Spanish using an iterative team-based translation approach and parallel development that allowed for modifications to the source instrument. 16,17 The survey assessed participants' abortion-related behavioral, control, and normative beliefs as per RAA. Table 2 shows a list of questions included in the SBE survey. ...
Article
Objective: Salient belief elicitations (SBEs) measure beliefs toward a health behavior through open-ended questions, with the purpose of developing close-ended survey questions. Auxiliary verbs used in SBE questions often differ (e.g., What are the top 3 reasons you would/should decide to have an abortion?). We tested how 2 auxiliary verbs function in an SBE assessing abortion in English and Spanish: would/decidiría and should/debería. Methods: We administered an SBE survey online (N = 175) and in-person (N = 72); in-person participants also participated in cognitive interviews to assess question interpretation. Participants were assigned to survey versions that included identical SBE questions aside from the auxiliary verbs: would/decidiría versus should/debería. Data analysis included: (1) content analysis of survey responses to assess differences in responses by version and (2) thematic analysis of interview data focused on interpretations of would/decidiría and should/debería. Results: Would/decidiría surveys generated more response categories. Similarly, cognitive interview findings suggest participants conceptualized would/decidiría as allowing for more options, while should/debería was thought to include only the most significant reasons/circumstances for abortion, potentially restricting participants’ responses. Conclusion: These findings have important measurement implications for researchers administering SBEs.
... We decided on a research design that was as close as possible to the typical translation process setting in large-scale survey projects such as the ESS. These surveys usually employ a so-called team or committee approach, which includes the TRA steps of the TRAPD (translation, review, adjudication, pretesting, and documentation) translation process (Dorer 2020; Harkness 2003). To mimic real-life situations in the survey business, we relied on translation teams translating the items (instead of relying on systematically modified translations with varying degrees of closeness manipulated step by step). ...
Article
To challenge the commonly made assumption in cross-national survey projects that close translation yields more comparable data than adaptation, we implemented a translation experiment in the CROss-National Online Survey Panel. The English source questionnaire was split into three batches of 20 items each and was translated by three translation teams into Estonian and three teams into Slovene. The teams received specific instructions on how to translate each batch (either closely or adaptively) so that, by design, the teams translated two batches following one approach and one following the other approach. Respondents in the two countries (Estonia and Slovenia) were randomly assigned to three distinct questionnaire versions based on the same source questionnaire, each consisting of translations by all three teams and including close and adaptive translations. We developed an analytical framework to assess the translation potential of the source items (i.e., all theoretically possible translations of a specific item) and the actual translation scores (i.e., the degree of closeness vs. adaptiveness of a specific translation). We show that some items are more sensitive to the wording (small linguistic changes result in a different response behavior) while others are more robust (the meaning of the concept is retained despite linguistic changes).
Chapter
Survey Design and Implementation Considerations in International and Cross-National Research
Article
Questionnaires are a specific text type with its own challenges for translation. Translating answers to open-ended questions, where respondents answer in their own words and not with a predefined set of answers, is a particular endeavour. This paper is about translations of answers to open-ended questions with the aim of developing recommendations for this task. Embedded in a web survey project, answers about respondents' understanding of the terms "left" and "right", provided in Canada, Spain and the US, were translated into German. I analysed these translations, in cooperation with native speakers from all three countries, and coded my findings into a translation error and issue coding scheme. Context plays a crucial role for such translations, and it is important that the translators have a very good knowledge and understanding of the country to which the answers refer and of the topic (in my case the political and social landscape in the respective country).
Technical Report
Full-text available
Comparative surveys are surveys that study more than one population with the purpose of comparing various characteristics of the populations. The purpose of these surveys is to facilitate research on social phenomena across populations and, frequently, over time. Researchers often refer to comparative surveys that take place in multinational, multiregional, and multicultural contexts as “3MC” surveys. To achieve comparability, these surveys need to be carefully designed according to state-of-the-art principles and standards. The main purposes of this task force report, commissioned jointly by the American Association for Public Opinion Research (AAPOR) and the World Association for Public Opinion Research (WAPOR), are to identify the most pressing challenges concerning data quality, promote best practices, recommend priorities for future study, and foster dialogue and collaboration on 3MC methodology. The intended audience for this report includes those involved in all aspects of 3MC surveys, including data producers, data archivists, data users, funders and other stakeholders, and those who wish to know more about this discipline.