Translation - Science topic
Explore the latest questions and answers in Translation, and find Translation experts.
Questions related to Translation
I am doing research on the topic "the use of language in monologues in Raditladi's Setswana translation of Shakespeare's Macbeth". I am using descriptive translation theory, which is why, for my data analysis, I want to use comparative content analysis.
Since technology is advancing rapidly and new AI applications appear every day in different areas of study, will AI replace human translators?
Dear colleagues,
I would appreciate it if you could explain the difference between translation criticism and translation quality assessment in two or three sentences.
Thank you!
Can translation be simultaneously viewed as part of linguistics and an independent science?
Over the past year, I participated in a machine learning research project in clinical medicine and discovered a fascinating problem. The task involves using six years of clinical patient data to predict patients' survival periods for personalized treatment. However, this task is fundamentally different from previous AI projects I worked on in imaging or communications.
Key Challenges:
- Using Linear Regression to predict survival time would be problematic if focusing solely on survival duration. For instance, patients with the same predicted survival time might have conflicting outcomes—some alive and others deceased.
- Using Logistic Regression to predict binary outcomes (alive/dead) fails to capture survival duration, rendering it less meaningful (e.g., "everyone dies eventually in 100 years").
Questions:
- What AI models are better suited to this scenario while maintaining interpretability? I have tried training a classification model and a regression model for it.
- What qualitative metrics can evaluate model performance? Current classification metrics (e.g., ROC, AUC) and regression metrics (e.g., R², MSE) are inadequate for this task.
Translation Notes:
- Survival period: Translated as "survival period" or "survival time" depending on context.
- Personalized treatment: Standard term in clinical AI.
- Interpretability: Emphasizes the need for models like Cox Proportional Hazards, survival trees, or explainable deep learning variants.
- Qualitative metrics: Refers to survival-specific metrics like C-index, time-dependent AUC, or Brier Score.
features :
Year of diagnosis; ID; Name; Gender; Age; p/s CNSL; pathological type; double expression; GCB/ABC; diagnosis to antitumor time (d); lesion location; CSF protein (mg/L, ≤450); KPS; ECOG; IELSG; MSKCC; serum β2-MG (ng/ml); WBC (×10^9/L); ANC (×10^9/L); ALC (×10^9/L); AMC (×10^9/L); HGB (g/L); RDW (%); PLT (×10^9/L); platelet-lymphocyte ratio (PLR); lymphocyte-monocyte ratio (LMR); albumin (g/L); globulin (g/L); albumin-globulin ratio (AGR); LDH; platelet-albumin ratio (PAR); LDH-lymphocyte ratio (LLR); EBV_viremia (IU/ml); Excision/Navigation; Radiation; FISH(BCL6); FISH(c-myc); Cycle 1; Cycle 2; Cycle 3; Cycle 4; Cycle 5; Cycle 6; ASCT; Response before ASCT; Course_before_ASCT; Conditioning regimen; Response after ASCT
labels:
Alive Alive_code Survival time (d) Survival time (m)
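For the survival-prediction problem described above, the usual alternatives to plain regression or classification are dedicated survival models (e.g. Cox Proportional Hazards, survival trees), evaluated with the concordance index (C-index), which handles censored patients directly. As a minimal sketch (plain Python, with hypothetical toy inputs, not the project's actual pipeline), Harrell's C-index can be computed like this:

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: the fraction of comparable patient pairs whose
    predicted risks are ordered consistently with observed survival.

    times:  observed survival or censoring times
    events: 1 if the event (death) was observed, 0 if censored
    risks:  model-predicted risk scores (higher = shorter expected survival)
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable only if patient i's event was observed
            # and occurred before patient j's (possibly censored) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0   # correctly ranked pair
                elif risks[i] == risks[j]:
                    concordant += 0.5   # ties count half
    return concordant / comparable
```

A censored patient (event = 0) is never used as the "earlier" member of a pair, which is exactly the property that ROC/AUC and MSE lack for this task; libraries such as lifelines and scikit-survival provide production implementations of both the metric and Cox-type models.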
This was a brief exchange between myself and ChatGPT, which I have been utilising for the first time on a new translation project.
The points are self-explanatory (please excuse the typo in the screenshot), but at the risk of voicing an unpopular view, I'd like to hear what the community thinks about translation software programs in terms of their pros and cons, and the need (if any) for them.


Knowing both languages, together with the linguistic ability to mediate between them, is enough to practice translation as a profession. Teaching translation, on the other hand, is different: as Gabris (2000) states, a translation teacher must have knowledge and experience in translation as well as the ability to teach it.
The global proliferation of multilingual information necessitates robust and adaptable machine translation (MT) systems. Recent advances in large language models (LLMs) have transformed the field, enabling systems that understand and produce text across many languages with remarkable skill. This paper surveys current LLM-based multilingual translation practice, discussing approaches and obstacles and outlining prospective developments. It examines architectural choices, training methods, and the evaluation criteria used to measure progress toward global cross-lingual communication.
The Rise of Multilingual Language Models
NLP has undergone a seminal transformation driven by large pre-trained language models [4]. Models such as mBERT [10], XLM [1], and their successors excel at cross-lingual understanding and generation tasks. They acquire this capability through pre-training on extensive multilingual document collections, which allows them to develop unified linguistic representations shared across languages. These shared representations make zero-shot cross-lingual transfer possible: a single model can work across languages without explicit retraining [3, 15].
Much of the value of multilingual models comes from the natural connections between languages [1]. FILTER [1] improves cross-lingual language understanding through a fine-tuning process that fuses cross-lingual data, encodes each language independently, and then applies a fusion step to extract multilingual knowledge. Related work proposes an embedding-alignment process that measures sentence similarity with pretrained monolingual embedding models in order to construct soft labels from text similarities [2].
Deploying multilingual machine translation in practice involves multiple complexities [4]. Studies indicate that some of the benefits observed in multilingual models do not stem directly from cross-lingual knowledge transfer, and LLM performance varies substantially across languages [5].
Architectures and Training Strategies
The success of LLM-based multilingual translation depends critically on both model architecture and training strategy. The transformer, with its attention mechanism, has become the dominant architecture in the field [10]; its design effectively learns long-range dependencies and word-level correspondences between languages [10].
Pre-training is a crucial step in training LLMs for multilingual translation [3, 6]. Models are typically pre-trained on massive multilingual corpora using tasks like masked language modeling and next sentence prediction [6]. This pre-training phase allows the model to learn general linguistic knowledge and develop a shared understanding of different languages. After pre-training, the model is fine-tuned on specific translation tasks, using parallel corpora or other forms of supervision [1, 3].
Several techniques have been developed to improve the effectiveness of pre-training and fine-tuning. Mixed-lingual pre-training leverages both cross-lingual and monolingual tasks: the model benefits from the abundance of monolingual data to strengthen its language modeling while also learning cross-lingual relationships through translation tasks [6]. Contrastive learning has also been employed, where the model is trained to learn similar representations for sentences that are translations of each other [2, 14].
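To make the contrastive-learning idea concrete, here is a minimal sketch (NumPy, with hypothetical toy embeddings; not any specific paper's implementation) of an InfoNCE-style objective that pulls each sentence toward its translation and pushes it away from the other sentences in the batch:

```python
import numpy as np

def contrastive_translation_loss(src_emb, tgt_emb, temperature=0.1):
    """InfoNCE-style loss: row i of src_emb and row i of tgt_emb embed a
    sentence and its translation (the positive pair); every other row in
    the batch serves as an in-batch negative."""
    src = src_emb / np.linalg.norm(src_emb, axis=1, keepdims=True)
    tgt = tgt_emb / np.linalg.norm(tgt_emb, axis=1, keepdims=True)
    logits = src @ tgt.T / temperature           # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # correct pairs on the diagonal
```

Minimising this loss makes translation pairs nearest neighbours in the shared embedding space, which is precisely the property that zero-shot cross-lingual transfer relies on.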
Zero-Shot and Low-Resource Translation
A significant advantage of LLMs is their ability to perform zero-shot translation, translating between language pairs without any direct training data [3, 15]. This capability is particularly valuable for low-resource languages, where parallel corpora are scarce.
However, the performance of zero-shot translation can be limited [13]. Translation quality often suffers, especially for languages with significant linguistic differences. To address this, researchers have explored various strategies to improve zero-shot translation. One approach involves augmenting the model with additional knowledge, such as bilingual dictionaries or cross-lingual word embeddings [18]. Another strategy is to use images as pivots, enabling the model to learn translations by associating words with visual concepts [11].
For low-resource languages, translation-based approaches have shown promise [13]. These methods involve translating the source language training data and the target language test instances, enhancing the model's ability to learn from limited resources [13]. Furthermore, techniques such as optimal transport distillation can be used to transfer knowledge from high-resource to low-resource languages [16].
Cross-Lingual Transfer for Downstream Tasks
The benefits of multilingual LLMs extend beyond direct translation. They have also proven effective in cross-lingual transfer for various downstream tasks, such as question answering, summarization, and information extraction [1, 9, 6, 7].
In cross-lingual question answering, the goal is to answer questions in one language using information from another language [9, 14]. LLMs can be fine-tuned on question-answering datasets in high-resource languages and then applied to low-resource languages, leveraging the shared representations learned during pre-training [14]. Techniques like MuCoT [14] augment the QA samples of the target language using translation and transliteration to improve performance. XOR QA [9] enables questions from one language to be answered via answer content from another, addressing both information scarcity and asymmetry.
Cross-lingual summarization aims to generate a summary in one language for a document written in another [6, 8]. LLMs can be trained to generate summaries in the target language by leveraging the information from the source language document [6]. The mixed-lingual pre-training approach has proven effective in cross-lingual summarization, where the model learns to generate summaries by leveraging both cross-lingual tasks, such as translation, and monolingual tasks, such as masked language models [6].
Cross-lingual open information extraction (OIE) seeks to extract structured information from text across multiple languages [7]. MT4CrossOIE [7] uses a multi-stage tuning framework to enhance cross-lingual OIE by injecting language-specific knowledge into a shared model. This framework uses language-specific modules and prompting techniques to improve performance.
Enhancements and Applications
The field of LLM-based multilingual translation is continuously evolving, with researchers exploring various enhancements and applications.
One area of focus is improving the interpretability of LLMs [10]. Understanding how these models make decisions is crucial for building trust and ensuring their responsible use. Studies have investigated the role of attention heads in Transformer-based models, revealing that pruning certain heads can improve performance in cross-lingual and multilingual tasks [10].
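The head-pruning finding can be illustrated with a small sketch (NumPy, hypothetical random weights; pruning studies ablate heads in essentially this way): multi-head attention in which a binary mask zeroes the output of pruned heads.

```python
import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def masked_multi_head_attention(X, Wq, Wk, Wv, head_mask):
    """X: (seq_len, d_model); Wq/Wk/Wv: (n_heads, d_model, d_head);
    head_mask: (n_heads,) of 0/1 -- a 0 prunes that head entirely."""
    outputs = []
    for h, keep in enumerate(head_mask):
        Q, K, V = X @ Wq[h], X @ Wk[h], X @ Wv[h]
        attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # scaled dot-product
        outputs.append(keep * (attn @ V))               # pruned head -> zeros
    return np.concatenate(outputs, axis=-1)
```

Measuring downstream task performance while toggling entries of `head_mask` is the basic ablation used to identify which heads help or hurt cross-lingual transfer.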
Another area of research is the development of more sophisticated prompting strategies [5, 20]. Prompting involves providing the model with specific instructions or examples to guide its behavior. Multi-Lingual Prompt (MLPrompt) [20], for example, automatically translates error-prone rules into another language to improve LLMs' reasoning and understanding. Contrastive alignment instructions (AlignInstruct) [17] emphasizes cross-lingual supervision via a cross-lingual discriminator, improving translation quality in unseen and low-resource languages.
LLMs are also being applied to cross-lingual plagiarism detection [12]. By simulating word embeddings, models can detect plagiarism by reproducing the predictions of online machine translators, even when translated texts are replaced with synonyms [12].
The use of LLMs in multi-modal tasks, such as image captioning, is also gaining traction [8, 19]. Unpaired cross-lingual image caption generation uses self-supervised rewards to address the lack of paired image-caption data for different languages [19].
Challenges and Limitations
Despite the remarkable progress, several challenges and limitations remain in LLM-based multilingual translation.
- One significant challenge is the issue of language bias [16]. Multilingual models are often trained on datasets that are skewed towards certain languages, leading to performance disparities. This bias can negatively impact the quality of translations, especially for low-resource languages.
- Another challenge is the lack of interpretability [4]. While researchers are making progress in this area, understanding how LLMs make translation decisions remains difficult. This lack of transparency can hinder the development of more reliable and trustworthy translation systems.
- The computational cost of training and deploying LLMs is also a concern [4]. Large models require significant resources for training and inference, making them expensive to develop and maintain.
Furthermore, the quality of translations can still be imperfect, particularly for complex or nuanced text [4, 19]. LLMs may struggle with idiomatic expressions, cultural references, and other subtleties of human language.
Finally, ethical considerations are crucial [4]. The use of LLMs for translation raises concerns about privacy, bias, and misinformation. It is essential to develop and deploy these technologies responsibly, ensuring that they are used to promote understanding and communication, not to exacerbate existing inequalities or spread harmful content.
Future Directions
The field of LLM-based multilingual translation is poised for continued innovation. Several promising directions for future research include:
- Improving Language Fairness: Developing techniques to mitigate language bias and ensure equitable performance across all languages. This could involve using more balanced training datasets, incorporating techniques to explicitly address language bias during training, or developing methods to adapt models to specific language characteristics.
- Enhancing Interpretability: Improving the interpretability of LLMs to understand how they make translation decisions. This could involve developing techniques to visualize attention mechanisms, identify key features used for translation, or create more transparent model architectures.
- Developing More Efficient Models: Reducing the computational cost of training and deploying LLMs. This could involve exploring model compression techniques, developing more efficient architectures, or leveraging hardware accelerators.
- Improving Translation Quality: Enhancing the quality of translations, particularly for complex or nuanced text. This could involve developing more sophisticated prompting strategies, incorporating external knowledge sources, or using reinforcement learning to optimize translation quality.
- Advancing Zero-Shot and Low-Resource Translation: Developing more effective techniques for zero-shot and low-resource translation. This could involve exploring new pre-training objectives, developing better methods for cross-lingual transfer, or leveraging techniques like meta-learning. Further research can be done on techniques such as Wikily-supervised translation models [18], which can achieve high BLEU scores in low-resource languages.
- Integrating Multimodal Information: Incorporating multimodal information, such as images and audio, to improve translation quality and expand the scope of translation tasks [8, 11, 19]. This could involve developing models that can translate text in the context of images, videos, or other modalities.
- Addressing Ethical Concerns: Developing and deploying LLM-based translation technologies responsibly, addressing issues of privacy, bias, and misinformation. This could involve developing guidelines for responsible use, creating tools to detect and mitigate bias, and promoting transparency and accountability.
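Since several of the directions above are measured in BLEU, a minimal sentence-level BLEU sketch may be useful (plain Python, unsmoothed, single reference; real evaluations should use a standard tool such as sacreBLEU):

```python
from collections import Counter
from math import exp, log

def sentence_bleu(hypothesis, reference, max_n=4):
    """Unsmoothed BLEU: geometric mean of clipped n-gram precisions
    (n = 1..max_n) multiplied by a brevity penalty. Inputs are token lists."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp = Counter(tuple(hypothesis[i:i + n])
                      for i in range(len(hypothesis) - n + 1))
        ref = Counter(tuple(reference[i:i + n])
                      for i in range(len(reference) - n + 1))
        overlap = sum(min(c, ref[g]) for g, c in hyp.items())  # clipped counts
        total = max(sum(hyp.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0  # unsmoothed: any zero precision zeroes the score
    bp = 1.0 if len(hypothesis) >= len(reference) \
        else exp(1 - len(reference) / len(hypothesis))
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

Corpus-level BLEU aggregates the n-gram counts over all sentences before taking the geometric mean, which is why single-sentence scores can differ noticeably from reported corpus scores.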
In conclusion, LLMs have revolutionized multilingual translation, offering unprecedented capabilities in cross-lingual communication. While challenges remain, the field is rapidly evolving, with ongoing research focused on addressing limitations and expanding the scope of these technologies. The future of multilingual translation is bright, with the potential to unlock new opportunities for global communication and collaboration.
==================================================
References
[1] Yuwei Fang, Shuohang Wang, Zhe Gan, Siqi Sun, Jingjing Liu. FILTER: An Enhanced Fusion Method for Cross-lingual Language Understanding. arXiv:2009.05166v3 (2020). Available at: http://arxiv.org/abs/2009.05166v3
[2] Minsu Park, Seyeon Choi, Chanyeol Choi, Jun-Seong Kim, Jy-yong Sohn. Improving Multi-lingual Alignment Through Soft Contrastive Learning. arXiv:2405.16155v2 (2024). Available at: http://arxiv.org/abs/2405.16155v2
[3] Zewen Chi, Li Dong, Furu Wei, Wenhui Wang, Xian-Ling Mao, Heyan Huang. Cross-Lingual Natural Language Generation via Pre-Training. arXiv:1909.10481v3 (2019). Available at: http://arxiv.org/abs/1909.10481v3
[4] Tom Kocmi, Dominik Macháček, Ondřej Bojar. The Reality of Multi-Lingual Machine Translation. arXiv:2202.12814v1 (2022). Available at: http://arxiv.org/abs/2202.12814v1
[5] Xiang Zhang, Senyu Li, Bradley Hauer, Ning Shi, Grzegorz Kondrak. Don't Trust ChatGPT when Your Question is not in English: A Study of Multilingual Abilities and Types of LLMs. arXiv:2305.16339v2 (2023). Available at: http://arxiv.org/abs/2305.16339v2
[6] Ruochen Xu, Chenguang Zhu, Yu Shi, Michael Zeng, Xuedong Huang. Mixed-Lingual Pre-training for Cross-lingual Summarization. arXiv:2010.08892v1 (2020). Available at: http://arxiv.org/abs/2010.08892v1
[7] Tongliang Li, Zixiang Wang, Linzheng Chai, Jian Yang, Jiaqi Bai, Yuwei Yin, Jiaheng Liu, Hongcheng Guo, Liqun Yang, Hebboul Zine el-abidine, Zhoujun Li. MT4CrossOIE: Multi-stage Tuning for Cross-lingual Open Information Extraction. arXiv:2308.06552v2 (2023). Available at: http://arxiv.org/abs/2308.06552v2
[8] Yash Verma, Anubhav Jangra, Raghvendra Kumar, Sriparna Saha. Large Scale Multi-Lingual Multi-Modal Summarization Dataset. arXiv:2302.06560v1 (2023). Available at: http://arxiv.org/abs/2302.06560v1
[9] Akari Asai, Jungo Kasai, Jonathan H. Clark, Kenton Lee, Eunsol Choi, Hannaneh Hajishirzi. XOR QA: Cross-lingual Open-Retrieval Question Answering. arXiv:2010.11856v3 (2020). Available at: http://arxiv.org/abs/2010.11856v3
[10] Weicheng Ma, Kai Zhang, Renze Lou, Lili Wang, Soroush Vosoughi. Contributions of Transformer Attention Heads in Multi- and Cross-lingual Tasks. arXiv:2108.08375v1 (2021). Available at: http://arxiv.org/abs/2108.08375v1
[11] Shizhe Chen, Qin Jin, Jianlong Fu. From Words to Sentences: A Progressive Learning Approach for Zero-resource Machine Translation with Visual Pivots. arXiv:1906.00872v1 (2019). Available at: http://arxiv.org/abs/1906.00872v1
[12] Victor Thompson. Detecting Cross-Lingual Plagiarism Using Simulated Word Embeddings. arXiv:1712.10190v2 (2017). Available at: http://arxiv.org/abs/1712.10190v2
[13] Benedikt Ebing, Goran Glavaš. To Translate or Not to Translate: A Systematic Investigation of Translation-Based Cross-Lingual Transfer to Low-Resource Languages. arXiv:2311.09404v2 (2023). Available at: http://arxiv.org/abs/2311.09404v2
[14] Gokul Karthik Kumar, Abhishek Singh Gehlot, Sahal Shaji Mullappilly, Karthik Nandakumar. MuCoT: Multilingual Contrastive Training for Question-Answering in Low-resource Languages. arXiv:2204.05814v1 (2022). Available at: http://arxiv.org/abs/2204.05814v1
[15] Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman Vural, Kyunghyun Cho. Zero-Resource Translation with Multi-Lingual Neural Machine Translation. arXiv:1606.04164v1 (2016). Available at: http://arxiv.org/abs/1606.04164v1
[16] Zhiqi Huang, Puxuan Yu, James Allan. Improving Cross-lingual Information Retrieval on Low-Resource Languages via Optimal Transport Distillation. arXiv:2301.12566v1 (2023). Available at: http://arxiv.org/abs/2301.12566v1
[17] Zhuoyuan Mao, Yen Yu. Tuning LLMs with Contrastive Alignment Instructions for Machine Translation in Unseen, Low-resource Languages. arXiv:2401.05811v2 (2024). Available at: http://arxiv.org/abs/2401.05811v2
[18] Mohammad Sadegh Rasooli, Chris Callison-Burch, Derry Tanti Wijaya. "Wikily" Supervised Neural Translation Tailored to Cross-Lingual Tasks. arXiv:2104.08384v2 (2021). Available at: http://arxiv.org/abs/2104.08384v2
[19] Yuqing Song, Shizhe Chen, Yida Zhao, Qin Jin. Unpaired Cross-lingual Image Caption Generation with Self-Supervised Rewards. arXiv:1908.05407v1 (2019). Available at: http://arxiv.org/abs/1908.05407v1
[20] Teng Wang, Zhenqi He, Wing-Yin Yu, Xiaojin Fu, Xiongwei Han. Large Language Models are Good Multi-lingual Learners: When LLMs Meet Cross-lingual Prompts. arXiv:2409.11056v1 (2024). Available at: http://arxiv.org/abs/2409.11056v1
I have the task of translating a research study into Arabic and ensuring the correctness of the wording and translation.
One major limitation of machine translation is its reliance on direct word-to-word translation, which often fails when dealing with legal terms that require contextual adaptation.
Often, when people try to translate using artificial intelligence, errors occur in the translation.
Hello everyone,
I am modelling a B+G+5 building. Everything was correct; the pre- and post-analysis checks all passed. But after some new revisions, the post-analysis check now shows that the mass participation ratios for translation and rotation in the first 3 modes are nearly zero. I would like some suggestions for improving my mass participation ratios. I am also attaching the mass participation table from ETABS.
Thank you

Hello,
Is there a way to translate an article from Spanish to English?
Polysemy can complicate translation because the same word may have different meanings depending on the context. Translators must rely on contextual clues, collocations, and cultural background to determine the intended meaning.
Historical context plays a crucial role in shaping the meaning of words, as language evolves alongside cultural, social, and political changes. Words may acquire new meanings, shift in connotation, or become obsolete due to historical events, technological advancements, or societal transformations. In translation, understanding historical context is essential for accurately conveying meaning, as a word’s interpretation in one era may differ significantly from its modern usage.
For instance, the Arabic word "عامل" (ʿĀmil) once meant "governor" in Yemen around 70 years ago. However, over time, it has lost this meaning, and its modern usage refers to a "laborer." Without considering historical subtleties, translators risk misrepresenting texts, leading to misunderstandings or distortions of the original message.
Therefore, historical context is crucial in ensuring linguistic accuracy and cultural authenticity in translation.
The translation of politically charged words like "freedom" and "justice" is deeply influenced by the political context in which they are used. These terms do not carry fixed, universal meanings but rather reflect ideological perspectives shaped by history, power dynamics, and cultural narratives. What one group considers "freedom" may be seen by another as oppression, and "justice" may take different forms depending on legal, social, or political frameworks.
What are the significant technical obstacles in the development of instantaneous speech translation tools?
I would appreciate your insights on this question. Could you please share your thoughts?
Dear colleagues,
What (more recent) model or translation strategies do you recommend for investigating and analyzing the translation of lexical collocations used in a literary novel?
Thank you.
SEMANTIC GAPS AND SOURCES OF NEW WORDS
Whenever there is a paradigm shift because of changing technology, religion, politics, culture, etc., new concepts are brought into the language. And when there are no words to talk about these new concepts, then new words must enter the language.
This PowerPoint gives examples of semantic gaps and the linguistic processes that are used to fill these semantic gaps: Borrowing, Loan Translation, Shift in Denotation or Connotation, Metaphorical Shift, Suffixation, Prefixation, Compounding, Clipping, Blending, Back Formation, Acronyming, Metathesis, Onomatopoeia, Reduplication, and Part of Speech Change. We also discuss “Sniglets.”
There are many models for assessing translation and interpreting quality; however, each model assigns different weights to the same criterion. Also, the theories that address the processes of translation and interpreting seem to be isolated from the real hindrances that translators and interpreters may face.
Does anyone have the instruction manual for the inverted microscope IM35 ICM405, in a format that could be translated by Google Translate?
Thanks.
Best, J
I recently came across an anatomy text by Carl Moller, published in 1915, but it is in German or Dutch, neither of which I can understand. I would like to know if there is an English translation of the text somewhere that I can obtain.
ZUR VERGLEICHENDEN ANATOMIE DER SILURIDEN ("On the Comparative Anatomy of the Silurids") by Carl Moller, 1915
Thanks for the help.
James E. Burgess
Translation from Energy-Addition to Energy-Transition: Not Feasible before 2030?
1. Despite the demand for natural gas remaining flat, the crude demand may continue to be over 100 million barrels per day; and at least 70% of the global energy mix would remain to be fossil fuels, at least until 2030.
2. As of today, fossil fuels constitute nearly 81.5% of primary energy consumption, even though renewable energy grew at six times the rate of total primary energy.
3. Global coal consumption reached an all-time high of 8.7 billion metric tons in 2024; and nearly, the same trend is expected to continue at least until 2030.
4. Global energy consumption increased by 2% (by 12.3 exajoules) from 2022.
[7.8 exajoules contribution from fossil fuels, and, 4.5 exajoules contribution from renewable energy]
5. Global emissions rose by 2.1%, crossing 40 billion metric tons of CO2 equivalents for the first time.
Suresh Kumar Govindarajan
20-Dec-2024
I want some examples of famous books where the titles were significantly altered in translation.
Why are book titles often changed when translated into another language?
I am interested in learning about the most effective strategies and approaches for optimizing the translation process using artificial intelligence to enhance accuracy, efficiency, and overall translation quality, particularly between English and Arabic.
The lack of updated bilingual dictionaries hinders accuracy, efficiency, and professionalism in specialized translation. It underscores the need for ongoing resource development to keep pace with the rapid evolution of specialized fields.
Hello, this is ali. I wanted to share with you the excellent academic English editing and translation services offered by AJE. You can receive $80 off your first purchase by visiting here:
#English editing #editing
How does balancing faithfulness and creativity in translation influence the translator's connection with the target audience?
How does the balance between maintaining fidelity and faithfulness to the source text and taking creative liberties in translation influence the translator's capacity to convey the original message, cultural differences, and emotional impact, while also ensuring that the target audience can fully comprehend the translated work?
I am writing a research paper titled "AI Enhancing Translation On Social Media Marketing?".
Hi everyone!
As a beginner, I have a question regarding uploading sequences to BOLD (and GenBank). I have obtained several raw plant sequences using rbcL, matK, trnL and trnH plastid primers. As far as I know, one should remove stop codons before uploading the sequences anywhere. However, there are several translation tables I can use. If I use Code 11 (Bacterial, Archaeal, and Plant Plastid) with 'ATG or alternative initiation codons' in ORFfinder, I obtain a final sequence of ca. 450 bp. However, when I upload it to BOLD, it says that a stop codon is detected and automatically identifies Code 1 (Standard) as the translation matrix. When I apply Code 1 in ORFfinder with 'ATG or alternative initiation codons', I obtain a resulting sequence of ca. 250 bp, almost half the length obtained with Code 11, but in that case no stop codons are detected. To me it seems more correct to apply Code 11, but I cannot understand why it results in an error in BOLD (and will probably also result in an error if I try to upload to GenBank). What am I doing wrong?
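Not an answer to the BOLD-specific behaviour, but a small sketch may help with checking frames and stop codons locally before uploading. To my knowledge, NCBI table 11 assigns the same amino acids and stop codons as the standard table 1 and differs mainly in which codons may act as initiation codons, so an internal-stop check with the standard table is a reasonable first diagnostic (plain Python, hypothetical example sequences):

```python
from itertools import product

BASES = "TCAG"
# Amino acids for codons TTT, TTC, TTA, ... in NCBI table-1 order
# ('*' marks a stop codon).
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): aa
               for c, aa in zip(product(BASES, repeat=3), AMINO_ACIDS)}

def translate_frame(seq, frame=0):
    """Translate one reading frame of a DNA sequence (partial codons dropped)."""
    seq = seq.upper().replace("U", "T")
    return "".join(CODON_TABLE[seq[i:i + 3]]
                   for i in range(frame, len(seq) - 2, 3))

def internal_stop_positions(seq, frame=0):
    """Amino-acid positions of stop codons occurring before the final codon."""
    protein = translate_frame(seq, frame)
    return [i for i, aa in enumerate(protein[:-1]) if aa == "*"]
```

Running this over all three frames of a barcode sequence shows which frame (if any) is free of internal stops, which is essentially the check BOLD performs when it flags a detected stop codon.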
International Translation Day 2024
We celebrate the invaluable work of translators, interpreters, and language professionals who break down language barriers and foster understanding across cultures. International Translation Day is a reminder of how translation promotes unity, global collaboration, and cultural exchange in our increasingly interconnected world.
This year's theme, “Translation: Bridging Cultures, Connecting Worlds,” highlights the critical role of translators in diplomacy, education, business, and humanitarian work. Let’s take a moment to appreciate the dedication and expertise of those who make communication possible across languages.
#InternationalTranslationDay #LanguageProfessionals #CulturalExchange #GlobalUnderstanding #TranslationMatters #BridgingCultures #MultilingualCommunication
The Sydney School versus Berkeley...
According to Newton's third law, for every action there must be an equal and opposite reaction; this is what produces a balance of forces.
Here's the strange thing about engineers.
The action of a large earthquake is three to four times the weight-reaction of the building, yet they expect an equilibrium of forces without bolting the structure to the ground. I offer them an extra force coming from the ground to balance the seismic action, and they are still wondering whether they want it!
Hi, I'm trying to synthesize gRNA (using T7 RNA Polymerase supplied by Biolabs) from a template designed accordingly. Unfortunately, after a purification step, the yield is not good (between 4-25 ng/µl). Do you have any solutions/tips to improve the efficiency of the in vitro transcription?
Thank you!
Dead languages are potentially easier to automate because they are static and thus have a fixed vocabulary and grammar.
Is it appropriate to place an ATG codon in front of the gene of interest, given that the secretion signal upstream of this gene has its own ATG? I need my protein of interest to be secreted into the medium, so I used a vector with an alpha factor. If I clone the gene of interest downstream of the alpha factor with its own ATG, is it possible that Pichia pastoris will recognize two reading frames and the protein of interest will be produced intracellularly? Or is it better to clone the gene of interest without its own ATG, so that I can be sure the yeast will read the alpha factor and the gene of interest as one reading frame and the protein will be secreted into the medium? Thanks in advance for your answers.
Considering the natural environment and habitus of translators, at the end of the day they are under the absolute and inevitable dominance of the cultural norms prevailing in the place where they live and were born. Their way of attributing meanings, of conceiving the "other" and the world, and their methodology for doing so are shaped within this culture. Even if translators are not deeply affected by these prevailing norms, it would not be true to say that they are completely free of their influence. Consequently, the translator interprets the source text under the influence of the target culture and then renders it into the target language in the way the prevailing norms of the culture he or she lives in present the source. At the very least, the influence of the target culture can be seen in the translator's work; he or she cannot escape this reality. I therefore think the degree to which a source-culture text can be rendered absolutely into a target-culture text is debatable, and the traditional way translators are taught to select their type of translation (source- or target-oriented) before the translation process should be thoroughly reevaluated and revised. The background of these problems should be argued and debated more visibly among lecturers. I hope I have reflected my views accurately here, and I hope this is beneficial.
I am searching for the complete English translation of "Al-Fasl" by Ibn Hazm. Please guide me. Thanks.
According to a 1907 reproduction, the 1842 article was published in: Abhandlungen der Königlichen Böhmischen Gesellschaft der Wissenschaften.
The 1907 reproduction, apparently under the auspices of H. A. Lorentz, can be found at: https://www.deutsche-digitale-bibliothek.de/item/PX6IXQLYSSVGGDQSZHMHONP5KWBS4FWL
An English translation of Doppler's 1842 monograph can be found in the book by Alec Eden, "The search for Christian Doppler"
Is such an important article only available as a reproduction?
Hi,
As part of a multilevel study examining the impact of steroid toxicity in patients with different rheumatic diseases (see here: https://vasup.ndorms.ox.ac.uk/), we collected data from the UK and Portugal. We have funding to pay for a translation agency, but we are also considering using Google Translate followed by cross-validation of the Portuguese and English transcripts from participants.
We'd appreciate any previous experiences.
Skopos theory, developed by Hans Vermeer, is a functionalist approach to translation that emphasizes the purpose (Skopos) of a translation as the primary factor guiding the translation process. According to this theory, the translator's decisions should be driven by the intended function of the translation in the target culture.
How can cognitive approaches help in translating religious texts?
1) Identify the concrete situation.
2) Have empathy.
3) Either already know the language or have a sufficiently effective AI translator.
Hello to all dear professors and researchers. During the western blot setup phase, the electrophoresis step worked well. But now, with the same raw materials, the electrophoresis run takes a very long time to separate the bands. What do you think is the cause, and what are possible solutions? Thank you very much.
Dear Researchers,
I am reaching out to you for your invaluable expertise in designing a questionnaire for an upcoming research project.
My research focuses on exploring the strategies utilised by translation teachers in addressing errors within the classroom setting, as well as their perceptions regarding the efficacy of error correction techniques. Given the complexity and significance of this topic, I believe that your insights and guidance would greatly enhance the quality and depth of my investigation.
Your experience and expertise in translation didactics research would be invaluable in shaping the structure and content of the questionnaire, ensuring that it elicits meaningful responses and provides valuable insights into the research questions at hand.
Thank you for considering my request.
Warm regards,
Good evening,
My IPA (Interpretative Phenomenological Analysis) research project involves conducting interviews in a different country and in a different language. Considering that I will need to account for cultural contexts and that the language may carry cultural meanings, would it be appropriate to consider Poblete's (2009) five operations of translation in the Methods of Analysis section? Additionally, are there any translation tools available that could expedite the translation process?
Thank you.
I have been through the web pages of RSC, ACS, and Elsevier looking for information about the use of AI for translation, but I didn't find an answer. Is it allowed to use AI for translation? Once the data discussion and conclusions have been written in one's mother language, how ethical (and permitted) is it to use AI to translate them into English?
In light of the overlap between Literary Theory and Translation, both of which take the interpretation of each literary work as their object, I would like to discuss the following points with you.
Does the interplay between Literary Theory and Translation Pedagogy matter?
How can we strike a balance between the text's objectivity and the translator's subjectivity?
How should Literary Theory subtly influence Literary Translation Methodology and its professionalism?
The inscription reads:
रस से रस
मति माया मति रस
कर तोरे अस सका १५ रोता ४ स २००४ स
My (possibly incorrect) Roman transliteration of the Sanskrit is:
rasa se rasa
mati maya mati rasa
kara tore asa saka 15 rota 4 sa 2004 sa
The last line seems to end with a date; if read in the Buddhist calendar, 2004 - 543 = 1461 CE:
15th rotating ? day of 4th month samvat era in year 2004 samvat era
For those of you who do not know the term, Gong'an literature is Chinese proto-crime fiction, often featuring the characters Judge Dee or Judge Bao. The Chinese stories are all public domain, of course. They are very old. But the translations are not. My question is this: does anyone know of a PD translation of a Judge Dee or Judge Bao story?
Transferring researcher profiles from ResearchGate to Google Scholar.
I am writing to you in relation to the following publication:
Gottlieb, Henrik (2022). La semiótica y la traducción. Hermēneus. Revista de Traducción e Interpretación, 24, pp. 643-675. doi:10.24197/her.24.2022.643-675 [translated from English by Laura Gata González and Anna Kuźnik]
It is a translation from English into Spanish by my student, Laura Gata González, and myself.
The first surname of Laura Gata González, i.e. "Gata", had been entered incorrectly as "Gato" (not "Gata") by the editor of the scientific journal Hermeneus into the DOI system and Crossref (it was corrected afterwards, though). As a consequence, it propagated to ResearchGate with the wrong spelling "Gato". How can I correct this?
Please, give me some indications on this.
I have already corrected the wrong spelling of my own surname, but I am not able to do the same with other authors' data.
With my best wishes,
Anna

I recently encountered an intriguing situation while examining a plasmid constructed by someone else for a eukaryotic expression system. This plasmid contains a unique arrangement of open reading frames (ORFs) that has sparked several questions regarding the potential outcomes of their translation.
In this plasmid, there is an ORF near the 5' end, where the translation initiation site is quickly followed by a stop codon, potentially resulting in a very short peptide. More interestingly, nested within this first ORF is a second ORF that begins inside the first ORF and could potentially translate into a much longer protein, consisting of 500 amino acids.
Given the common understanding that eukaryotic transcripts typically feature a single ORF, the discovery of this arrangement has led me to ponder the following questions about the translational dynamics in this specific scenario:
- In the context of this plasmid, will the translation machinery be capable of bypassing the short ORF to translate the longer protein, or will it prioritize the translation of the short peptide due to its proximity to the 5' end?
- If both peptides are indeed translated, what might be the expected ratio between the production of the long and short peptides?
- Is there a possibility that only the short peptide will be translated, effectively ignoring the translation potential of the longer, nested ORF?
Furthermore, I'm curious about how this scenario might differ if the plasmid were used in a prokaryotic system, which is known for its ability to translate multiple ORFs within a single transcript.
I'm seeking insights, experiences, or any relevant literature that could help shed light on the translational strategies employed by cells when faced with plasmids containing nested ORFs, especially in the context of eukaryotic expression systems.
Thank you in advance for sharing your knowledge and experiences.
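For what it is worth, the arrangement can be explored computationally before going to the bench. The naive scanner below is only a toy sketch (it predicts nothing about actual ribosome behaviour, which depends on Kozak context, leaky scanning, and reinitiation): it simply lists every ATG-to-stop ORF in a transcript, including a nested, out-of-frame one like the situation described above. The toy sequence is invented for illustration.

```python
# Naive ORF scanner: report every ATG ... in-frame stop, in all frames.
STOPS = {"TAA", "TAG", "TGA"}

def find_orfs(seq):
    """Return a list of (start, end, n_codons) for each ATG-to-stop ORF."""
    seq = seq.upper()
    orfs = []
    for i in range(len(seq) - 2):
        if seq[i:i + 3] != "ATG":
            continue
        for j in range(i + 3, len(seq) - 2, 3):  # walk codon by codon
            if seq[j:j + 3] in STOPS:
                orfs.append((i, j + 3, (j + 3 - i) // 3))
                break
    return orfs

# Toy transcript mimicking the plasmid: a short 5'-proximal uORF
# (ATG GAT GCT TAA) and a longer ORF whose ATG lies INSIDE the uORF,
# in a different reading frame (ATG CTT AAG CTG CTT GAC TGA).
mrna = "ATGGATGCTTAAGCTGCTTGACTGA"
for start, end, ncod in find_orfs(mrna):
    print(f"ORF at {start}-{end}: {ncod} codons")
# → ORF at 0-12: 4 codons
# → ORF at 4-25: 7 codons
```

In a eukaryotic context, scanning initiation usually favours the 5'-proximal ATG, so a tool like this only tells you where the competing ORFs are, not which one wins.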
Errors in translation didactics related to teaching strategies can be utilized as valuable learning opportunities. By analyzing these errors, educators can identify areas of weakness in their teaching methods and curriculum design
Hi! I'm new to the field of fluorescence in situ hybridization (FISH).
I want to use DNA FISH to visualize a small region in the human genome (around 1kb). I'm not sure if this is too short to use probes generated by nick translation.
I guess I probably need to order a set of short probes that all anneal to this region to enhance my signal. Does the Stellaris® RNA FISH system from Biosearch Tech (https://www.biosearchtech.com/products/rna-fish) apply to my case?
Or do you think this experiment is doable? What is the best way to do it?
Thanks a lot!
Can I upload to ResearchGate an English translation of my book or article which was originally published in another language?
Hi everyone!
I'm struggling to find the correct English translation for "surclones".
For example, you can obtain these "surclones" by streaking an [ADE-] strain on adenine-depleted medium: most cells won't grow, but a few clones appear, due for example to the reversion of a mutation. So what do you call these few clones?
Thank you all!
In chapter 35 of Don Quijote, Cervantes used a scene from "The Golden Ass" (an unfortunate translation of the title) by Apuleius. The rare version Cervantes read in Catholic Italy was censored. As he later read the original version in the library of the King of Algiers, he thought his copying would never be spotted. By the way, what was the Manchego slave doing in the King of Algiers's library?
Miguel de Cervantes, slave, and his master Hassan Pacha Veneziano (researchgate.net)
Hello everyone!
I have a question related to preparing a crystal structure for atomistic simulation.
I need a 3x3x3 translated unit cell in which opposite faces complement each other.
I start from the experimental crystal structure, but after translating the unit cell I obtain a big cell whose opposite faces superimpose on one another rather than complementing each other (yellow border).
So I need to delete the excess atoms. But I don't know how to check that my "manual cutting" is correct (i.e., what the best way to do it is). I'm afraid my eyes can deceive me.
Perhaps you have such experience and know of an additional, objective check?
Thank you for any answer!
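One objective check, assuming an orthorhombic box, is to search the trimmed supercell for atom pairs that coincide under periodic boundary conditions: any pair closer than a small tolerance marks a site duplicated by the translation that still needs deleting. A minimal pure-Python sketch with invented toy coordinates:

```python
import math

def find_duplicates(coords, box, tol=1e-3):
    """Flag atom pairs closer than `tol` under periodic boundary conditions,
    using the minimum-image convention (orthorhombic box assumed)."""
    dups = []
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for k in range(3):
                d = coords[i][k] - coords[j][k]
                d -= box[k] * round(d / box[k])  # minimum-image wrap
                d2 += d * d
            if math.sqrt(d2) < tol:
                dups.append((i, j))
    return dups

# Toy cell: atom 3 sits exactly one box length from atom 0, i.e. it is the
# same lattice site duplicated by the translation and should be deleted.
box = [10.0, 10.0, 10.0]
atoms = [(0.0, 0.0, 0.0), (2.5, 2.5, 2.5), (5.0, 5.0, 5.0), (10.0, 0.0, 0.0)]
print(find_duplicates(atoms, box))  # [(0, 3)]
```

For non-orthorhombic (triclinic) cells the wrap has to be done in fractional coordinates instead, but the idea is the same; tools like ASE or OVITO also offer such overlap checks.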

Zhuangzi lived around the 4th century BCE during the Warring States period. His work, also titled "Zhuangzi," is a foundational text of Daoism (Taoism) and is known for its philosophical depth, humor, and literary style.
Daoist Philosophy
The Dao (or Tao) is a central concept in Chinese philosophy, particularly in Daoism (Taoism). It's a fundamental idea that underlies the nature of reality, existence, and the way one should live. The term "Dao" itself translates to "the Way" or "the Path." Here are key aspects of the Dao:
- Unnameable and Ineffable: The Dao is often described as unnameable and ineffable. It transcends human language and understanding. In the classic Daoist text, the "Dao De Jing" attributed to Laozi, it is said, "The Dao that can be told is not the eternal Dao; the name that can be named is not the eternal name."
- Unity and Oneness: The Dao represents the underlying unity and oneness of the universe. It is the source and essence of all things, connecting everything in existence. Daoism emphasizes the interconnectedness of all phenomena.
- Natural Order: The Dao is associated with the natural order of the universe. It is the way things naturally are, beyond human attempts to impose artificial structures. Living in harmony with the Dao involves aligning oneself with the natural course of events.
- Wu Wei (無為) - Non-Action or Effortless Action: Daoism advocates the principle of Wu Wei, which is often translated as "non-action" or "effortless action." It doesn't mean complete inactivity but rather acting in accordance with the natural flow of the Dao, without unnecessary interference or resistance.
- Balance and Harmony: The Dao emphasizes balance and harmony. It is neither extreme nor excessive. Living in accordance with the Dao involves finding a middle way, recognizing the interplay of opposites, and avoiding extremes.
- Spontaneity and Simplicity: The Dao is spontaneous and simple. It operates without deliberate planning or artificial complexity. Daoist philosophy encourages a return to simplicity and a natural way of being.
- Eternal and Ever-Changing: The Dao is considered eternal and ever-changing. It is a paradoxical concept that transcends time and yet is in constant flux. It is both timeless and continuously evolving.
- Intuitive Understanding: Daoist wisdom is often characterized by an intuitive understanding of the Dao. It is not necessarily something that can be grasped through intellectual analysis but is recognized through direct experience and insight.
- Transcending Dualities: The Dao transcends dualities such as good and bad, beautiful and ugly, success and failure. It encompasses the totality of existence, recognizing the relativity and interconnectedness of opposites.
The Challenges of Interpreting & Translating Zhuangzi
Interpreting Zhuangzi poses several challenges, and the limits of translation play a crucial role in this process. Here are some aspects to consider:
- Cultural and Linguistic Differences: Zhuangzi's ideas are deeply rooted in the Chinese language and cultural context of his time. Translating these ideas into another language, especially one with different philosophical traditions, can lead to misunderstandings or loss of nuance.
- Conceptual Nuances: Certain Chinese philosophical concepts may not have direct equivalents in other languages. Translators often face challenges in conveying the subtle nuances of Zhuangzi's thought, such as the Dao (Tao), which encompasses the idea of the Way or the natural order.
- Ambiguity and Paradox: Zhuangzi is known for his use of paradox and ambiguity. Translating such literary and philosophical devices can be challenging because the meaning may shift or become less apparent in another language. Maintaining the richness of his language is a formidable task.
- Cultural References and Allusions: Zhuangzi often used anecdotes, allegories, and historical references that may be unfamiliar to readers from different cultural backgrounds. Translators need to decide how much contextual information to provide without overwhelming the reader.
- Poetic and Literary Style: Zhuangzi's writing is characterized by a poetic and literary style. The beauty and artistry of his prose may be difficult to capture fully in translation. The rhythm, wordplay, and rhetorical devices may not carry over seamlessly.
- Interpretation of Daoism: Daoism, as presented by Zhuangzi, involves a way of thinking and living that may be unfamiliar to Western philosophical traditions. Translators must carefully choose words and concepts that convey the essence of Daoism without imposing foreign philosophical frameworks.
- Different Editions and Manuscripts: The Zhuangzi has different editions and manuscripts, which can vary in content and arrangement. Translators may need to make choices about which version to use and how to reconcile differences.
Given these challenges, scholars and translators often provide extensive commentary and annotations alongside translations to offer readers a deeper understanding of Zhuangzi's text. Multiple translations by different scholars can also be valuable for gaining a more comprehensive view of Zhuangzi's ideas, as each translator may emphasize different aspects based on their interpretation.
On Stillness and Adaptability:
聖人之靜也非以不動為靜,寂然和之。
Shèng rén zhī jìng yě fēi yǐ bù dòng wéi jìng, jì rán hé zhī.
Translation: "The stillness of the sage is not attained by immobility; it is achieved through tranquil harmony."
聖人之樂水也,聖人之樂山也;聖人之動也,聖人之靜也。
Shèng rén zhī lè shuǐ yě, shèng rén zhī lè shān yě; shèng rén zhī dòng yě, shèng rén zhī jìng yě.
Translation: "The sage finds joy in water, the sage finds joy in mountains; the sage's movement is joyful, the sage's stillness is tranquil."
On Trained Spontaneity and Agile Decision-Making:
射猛於飛鏑者,禪讀之人也。鏑心見於物而不見於己,已物與己反而不知不知之知。
Shè měng yú fēi zhú zhě, chán dú zhī rén yě. Zhú xīn jiàn yú wù ér bù jiàn yú jǐ, yǐ wù yǔ jǐ fǎn ér bù zhī bù zhī zhī.
Translation: "The archer who shoots fiercely with flying arrows is a person of Zen reading. The arrow's heart is seen in the target but not in oneself, understanding is turned toward the object and oneself, not knowing this knowing."
耳任聲以聞,眼任色以視,心任意以思,體任勞以行。"
Pinyin: "Ěr rèn shēng yǐ wén, yǎn rèn sè yǐ shì, xīn rèn yì yǐ sī, tǐ rèn láo yǐ xíng."
Translation: "The ears are open to sound, the eyes are open to color, the mind is open to thought, the body is open to labor."
On Wu Wei and Effortless Action:
道不可道,名不可名。道名始離
Dào bù kě dào, míng bù kě míng. Dào míng shǐ lí.
Translation: "The Dao that can be told is not the eternal Dao; the name that can be named is not the eternal name."
行無行,名無名。事無事,名無名
Xíng wú xíng, míng wú míng. Shì wú shì, míng wú míng.
Translation: "Act without acting; name without naming. Handle affairs without contrivance; name without names."
On Technology & Meaning:
魚罾之乎者也,莫之以其魚;麗兔之乎者也,莫之以其兔;白燕之乎者也,莫之以其燕。言之隨也,莫之以其義;故曰,失之者,可勿捨乎?
Yú zēng zhī hū zhě yě, mò zhī yǐ qí yú; lì tù zhī hū zhě yě, mò zhī yǐ qí tù; bái yàn zhī hū zhě yě, mò zhī yǐ qí yàn. Yán zhī suí yě, mò zhī yǐ qí yì; gù yuē, shī zhī zhě, kě wù shě hū?
Translation: "The fish trap exists because of the fish; once you've gotten the fish, you can forget the trap. The rabbit snare exists because of the rabbit; once you've gotten the rabbit, you can forget the snare. Words exist because of meaning; once you've gotten the meaning, you can forget the words."
Since there are so many layers to interpreting Chinese, I will try to look at the meaning of each character used in Zhuangzi's writings. Classical Chinese, in the way I was taught, is read character by character.
For my PhD I am working in SE Spain with a community-led initiative that emerged as the social response to wildfires in the region. Now, they call themselves a 'Plataforma en Defensa del Territorio' and I am struggling with translating this concept to English. There doesn't seem to be a literal translation, although I can't imagine that the English-speaking world does not have similar citizen initiatives.
Any translation suggestions are most welcome! Thanks
Many clinical trialists integrate qualitative and/or mixed-methods research as part of their clinical trial projects. Could you please share your experiences and thoughts on the challenges of integrating these methodologies into clinical trials, and on how to address them?
Highly Open-ended question: How much of a language must a translator know for AI to do the rest of the translation?
What factors affect machine translation (MT) quality? I’m looking for human, scientific (published research), state-of-the-art, specific reflections, not AI-generated, impressionistic, older, general discussions.
I often hear about the quantity of resources being the crux of the issue. However, my hunch is that language pair, and more precisely language combination (directionality), is also an influencing factor. Say you're translating from Japanese (high-context language) into French (low-context language). In Japanese, you don't need to specify gender, number, etc. In French, you need that information, which means you'll have to make a guess (and take a chance), perform external research, ask the client, etc., but anyway, you probably won't find the answer within the source text (ST). Arguably, a MT system cannot make good decisions in that sort of context. Whereas, if you translate from Spanish into French, most of the information you need for the French target text (TT) can be retrieved directly from the Spanish ST.
When I researched the question in 2017-2018, it was clear from the literature that linguistic distance was a relevant factor in MT quality. For example: "Machine translation (MT) between (closely) related languages is a specific field in the domain of MT which has attracted the attention of several research teams. Nevertheless, it has not attracted as much attention as MT between distant languages. This is, on the one side, due to the fact that speakers of these languages often easily understand each other without switching to the foreign language. […] Another fact is that MT between related languages is less problematic than between distant languages…" (Popović, Arčan & Klubička, 2016, p. 43).
But what about now, in 2023 (soon 2024), with LLMs and recent improvements in NMT? Thank you!
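As a side note on measurement: language-pair effects like these are usually quantified with surface metrics such as chrF, which rewards character n-gram overlap with a reference translation. Below is a deliberately simplified, single-order sketch of the idea (the real chrF averages n-gram orders 1-6 and is available in sacrebleu); the French sentences are invented examples.

```python
from collections import Counter

def char_ngram_f(hyp, ref, n=3, beta=2.0):
    """Toy chrF-style score: recall-weighted F-score over character
    n-gram overlap between a hypothesis and a reference (single order n)."""
    def ngrams(s):
        s = s.replace(" ", "")
        return Counter(s[i:i + n] for i in range(len(s) - n + 1))
    h, r = ngrams(hyp), ngrams(ref)
    overlap = sum((h & r).values())  # clipped n-gram matches
    if not h or not r or overlap == 0:
        return 0.0
    p, rec = overlap / sum(h.values()), overlap / sum(r.values())
    return (1 + beta**2) * p * rec / (beta**2 * p + rec)

ref = "le chat dort sur le canapé"
print(char_ngram_f("le chat dort sur le canapé", ref))  # identical → 1.0
print(char_ngram_f("le chat dort sur le sofa", ref))    # partial overlap, < 1
```

Metrics like this are cheap and language-agnostic, which is exactly why they miss the gender/number guesses a Japanese-to-French system has to make; that gap is one reason human and learned metrics (e.g. COMET) are used alongside them.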
The advancement of machine translation (MT), commonly known as the mechanization of translation, has become a subject of considerable academic interest in recent years. MT systems have made remarkable progress, primarily due to the application of artificial intelligence and neural network technologies like neural machine translation (NMT). While these advancements have undoubtedly made translation more accessible and efficient, they have also given rise to several academic concerns.
One pivotal concern revolves around the quality of machine translations. Despite notable improvements, MT systems often struggle to match the nuanced and context-dependent nature of human translation. Ambiguities, cultural nuances, and idiomatic expressions pose significant challenges for MT systems in delivering accurate results. Another substantial concern centers on the potential impact on the translation profession itself.
There is apprehension about the displacement of human translators and the potential devaluation of their expertise as MT systems become more prevalent. Ethical considerations also come into play, particularly concerning the possibility of biased or offensive translations. MT systems may inadvertently perpetuate stereotypes or prejudices present in their training data.
Consequently, the academic community remains actively engaged in exploring these issues, with a shared goal of enhancing MT quality, addressing ethical dilemmas, and gaining a deeper understanding of the intricate relationship between humans and machines in the field of translation.
There's an increasing number of studies on the use of qualitative and mixed-methods research in clinical trials, but the process of translating evidence from clinical trials into practice and policy remains problematic. I am just wondering how qualitative and mixed methods could be used more effectively to facilitate translation.
How can one obtain permission from the original author to translate a scale?
What is the best teaching strategy for making use of translation errors in translation didactics?
How can we, as translation teachers, use students' errors?
In this era, AI technology is rapidly growing. There are some claims that AI may be able to produce human-like translations.
The Hundred Yue, or Wu and Yue, of southern Jiangsu and northern Zhejiang provinces respectively, represent one of China's notable ethnic and regional cultures. I want to know about research on Wu and Yue culture in the English-speaking world, and also about translations of the classics of the ancient Yue.
I would like to enquire about back-translation in cross-cultural studies, where the original questionnaire is in English.
I would like to translate it into Malay. I have referred to Brislin, but I would like more clarification from the experts.
Can any experts help me with this, please? :)
I have used a 5-point Likert scale for my research on 'catalysing spiritual transformation'. The scale has 20 items divided equally across 4 domains (factors). It is a dual-response scale: the first response rates the goal, while the second rates the accomplishment. It is a proven scale that has been validated for content and construct across continents. However, since I am using a translation for the first time, the author of the instrument, who approved my translation, suggested that it is proper to do a fresh construct validation of the translation. Accordingly, I prepared to do CFA and found that my sample size, after combining pretest and posttest data, was only 174. I would like to join the two sets of responses of each questionnaire and double the sample size to 348, considering that both sets of responses have identical structures, though with different foci. I also noticed from the correlation matrices for the two sets of responses and their combination that the correlation coefficients are noticeably better for the combination, all positive and > 0.5. Would it be scientifically sound to join the data of the two sets of responses and double my sample size in this way?
Look forward to your valuable thoughts.
Thankfully
Lawrence F Vincent
Genetic code expansion (GCE) technology usually uses a stop codon (e.g., TAG) to incorporate non-canonical amino acids into a protein. However, this might influence the expression of endogenous proteins by disturbing translation termination.
Is there any research on this side effect of GCE? How would GCE influence cellular homeostasis and phenotype?
Which is the best tool for translating an academic work: ChatGPT or Google Translate?
I'm far from being a Freud expert, but my feeling is that the 1950 translation by J. Strachey is in many ways outdated, if not misleading (starting from the very title of the essay, the awful choice of 'cathexis' for Besetzung, etc.). Are there other translations around? Any projects for a fresh translation? Thank you for pointing me to any available resources or information.
Hi!
I am trying to publish a translated instrument, but journal reviews raise the following issues:
- not enough novelty; language too specific or sample not adequate;
- not "high quality", even though the whole psychometric protocol was followed.
The codon AGG normally codes for arginine, but in an altered genetic code it codes for stop. Where does this occur?
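One well-documented case is the vertebrate mitochondrial genetic code (NCBI translation table 2), where AGA and AGG are reassigned from arginine to stop (and, in the other direction, TGA reads as tryptophan and ATA as methionine). A minimal lookup illustrating the reassignments:

```python
# Standard genetic code vs. the vertebrate mitochondrial code
# (NCBI translation table 2) for the reassigned codons.
STANDARD = {"AGG": "Arg", "AGA": "Arg", "TGA": "Stop", "ATA": "Ile"}
VERT_MITO = {"AGG": "Stop", "AGA": "Stop", "TGA": "Trp", "ATA": "Met"}

for codon in ("AGG", "AGA", "TGA", "ATA"):
    print(f"{codon}: standard = {STANDARD[codon]:4s}  "
          f"vertebrate mito = {VERT_MITO[codon]}")
```

Libraries such as Biopython expose the full set of alternative tables (`Bio.Data.CodonTable`) if a complete lookup is needed.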
Transcription and translation in eukaryotic and prokaryotic cells.
Are there any alternative methods that can be implemented to avoid a page-table walk on a TLB miss whenever there is a page fault in virtual-to-physical memory mapping?
(TLB = Translation Lookaside Buffer.)
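To make the terminology concrete, here is a toy software-managed TLB in Python (a sketch, not an OS implementation): on a hit the page walk is skipped entirely; only a miss triggers the simulated page-table walk, whose result is then cached. Real alternatives for reducing walks include larger/huge pages (fewer entries needed per working set), hashed or inverted page tables (shorter walks), and software-managed TLBs refilled by a trap handler, as on MIPS.

```python
from collections import OrderedDict

PAGE_SIZE = 4096

class SoftwareTLB:
    """Tiny LRU software TLB: repeated touches to a cached page never
    trigger another page-table walk."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()  # vpn -> pfn, oldest first
        self.walks = 0

    def page_walk(self, vpn):
        self.walks += 1        # stands in for an expensive multi-level walk
        return vpn + 0x1000    # fake frame number for the sketch

    def translate(self, vaddr):
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        if vpn in self.entries:
            self.entries.move_to_end(vpn)         # LRU refresh, no walk
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[vpn] = self.page_walk(vpn)
        return self.entries[vpn] * PAGE_SIZE + offset

tlb = SoftwareTLB()
for addr in (0x1000, 0x1004, 0x2000, 0x1008):
    tlb.translate(addr)
print(tlb.walks)  # 2: pages 1 and 2 are each walked exactly once
```

Note that an actual page fault (page not resident) still has to be serviced by the OS regardless of how the translation is cached; the techniques above only avoid redundant walks for already-mapped pages.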
In my experience, co-operating in a 'Lean' environment is much easier/smoother and more effective/efficient when people speak the same language. For that reason I love the 'Lean Lexicon' (a general reference book). Here and now, I am searching for comparable cases where a lexicon did wonders. I just hope my question comes across...
Thanks in advance,
Paul
Hello,
I am designing a plasmid with an SV40 promoter-driven antibiotic resistance. Does expression from an SV40 promoter require a TATA box upstream of the transcription start site? The original vector had a TATA box at -30, however this is lost in my cloning strategy. With my current plan, the transcription start site is just 8bp from the end of the SV40 promoter. Will this allow for expression, or is a TATA box needed?
Thanks!
Can anyone recommend software that could help with the transcription/translation of Arabic interviews? I am currently using Trint, but unfortunately it is not accurate.
How can posters be analyzed in qualitative research? What is the best approach to analyzing posters, and is thematic analysis a suitable method for this type of analysis? Can you recommend any interesting literature on analyzing images and posters in qualitative research?
Could this be considered a valid translation/back-translation procedure: you translate a questionnaire from, e.g., English to Chinese using one machine translation engine, then back-translate the obtained questionnaire from Chinese to English using another machine translation engine, and the resulting English version looks okay when compared with the initial one?
Google is consistently at the head of the pack when it comes to A.I. and algorithm-based learning, and Translate's no exception. The program generates translations using patterns found in huge amounts of text, discovered through millions of documents that have already been translated by humans. As time goes on, the program recognizes more and more patterns, receives input from real people, and continues to refine its translations.
In September, Google switched from Phrase-Based Machine Translation (PBMT) to Google Neural Machine Translation (GNMT) for handling translations between Chinese and English. The Chinese and English language pair has historically been difficult for machines to translate, and Google managed to get its system close to human levels of translation by using bilingual people to train the system ... Google planned to add GNMT for all 103 languages in Google Translate. That would mean feeding in data for 103^2 language pairs, and the artificial intelligence would have to handle 10,609 models.
Google tackled this problem by allowing a single system to translate between multiple languages.
I am investigating the translation of nonverbal elements of the epic novel "Path of Abay". I want to study its cultural aspect and its role in communication. I hope you can help me in any way.
Is the human preproinsulin molecule a good example for studying how amino acid sequences are configured according to the numbering proposed in the article "Numbering of the twenty proteinogenic amino acids"?
The amino acid sequence of the 110-amino-acid preproinsulin, the initial product of translation of insulin mRNA, is closely dependent on the numbering of the twenty proteinogenic amino acids.
Previously, I used Frank's formula D = b/(2 sin A) to determine this. For example, D = a*(z1*z1 + z2*z2 + z3*z3)/sqrt(z1*z1 + z2*z2 + z3*z3), where (z1, z2, z3) is the crystal orientation along the GB normal z. This works for [001], but not for [011] and [111]. Now I have to use the box size to obtain the rigid-body translation; please help.
We often see applications for translating texts, but they do not fulfil the purpose. I am looking for academically accredited programs in language development and translation.
I am working with a questionnaire that has ten questions with closed-ended answers. Should such a questionnaire be validated even though the answers are not summed and no aggregate score is obtained, as is done with quality-of-life questionnaires? The questionnaire has already been officially translated from the original language into a number of other languages. Hence there is some confusion as to whether such a questionnaire needs to be validated after translation or not.