Two Intermediate Translations Are Better Than One: Fine-tuning LLMs for Document-level Translation Refinement
By: Yichen Dong, Xinglin Lyu, Junhui Li and more
Potential Business Impact:
Makes translated documents sound more natural.
Recent research has shown that large language models (LLMs) can enhance translation quality through self-refinement. In this paper, we build on this idea by extending the refinement from sentence-level to document-level translation, specifically focusing on document-to-document (Doc2Doc) translation refinement. Since sentence-to-sentence (Sent2Sent) and Doc2Doc translation address different aspects of the translation process, we propose fine-tuning LLMs for translation refinement using two intermediate translations, combining the strengths of both Sent2Sent and Doc2Doc. Additionally, recognizing that the quality of intermediate translations varies, we introduce an enhanced fine-tuning method with quality awareness that assigns lower weights to easier translations and higher weights to more difficult ones, enabling the model to focus on challenging translation cases. Experimental results across ten translation tasks with LLaMA-3-8B-Instruct and Mistral-Nemo-Instruct demonstrate the effectiveness of our approach.
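The quality-aware weighting described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the softmax-over-negated-scores formula, and the batch-average normalization are all assumptions chosen to show the core idea that lower-quality (harder) intermediate translations receive larger training weights.

```python
import math

def quality_aware_weights(quality_scores, temperature=1.0):
    """Map per-example quality scores in [0, 1] to loss weights.

    Lower quality (a harder translation case) -> larger weight,
    via a softmax over the negated scores. The exact mapping is
    a hypothetical stand-in for the paper's weighting scheme.
    """
    logits = [-(s / temperature) for s in quality_scores]
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    n = len(quality_scores)
    # Scale so the weights average to 1 across the batch,
    # keeping the overall loss magnitude comparable.
    return [n * e / z for e in exps]

def weighted_loss(per_example_losses, quality_scores):
    """Combine per-example fine-tuning losses with quality-aware weights."""
    weights = quality_aware_weights(quality_scores)
    return sum(w * l for w, l in zip(weights, per_example_losses)) / len(weights)

# Example: two intermediate translations of the same document,
# one easy (estimated quality 0.9) and one hard (0.4).
weights = quality_aware_weights([0.9, 0.4])
# The harder example (score 0.4) receives the larger weight,
# so fine-tuning focuses on the more challenging case.
```

In a real setup the quality scores would come from an automatic metric (e.g., a reference-free quality estimator) and the weighted loss would replace the uniform average over examples during fine-tuning.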
Similar Papers
Multilingual Contextualization of Large Language Models for Document-Level Machine Translation
Computation and Language
Translates whole books, not just sentences.
Improving LLM-based Document-level Machine Translation with Multi-Knowledge Fusion
Computation and Language
Improves computer translation by using summaries and key words.
Beyond the Sentence: A Survey on Context-Aware Machine Translation with Large Language Models
Computation and Language
Makes computer translations understand more context.