Exploring Parameter-Efficient Fine-Tuning and Backtranslation for the WMT 25 General Translation Task
By: Felipe Fujita, Hideyuki Takada
Potential Business Impact:
Improves English to Japanese translation quality.
In this paper, we explore the effectiveness of combining fine-tuning and backtranslation on a small Japanese corpus for neural machine translation. Starting from a baseline English→Japanese model (COMET = 0.460), we first apply backtranslation (BT) using synthetic data generated from monolingual Japanese corpora, yielding a modest increase (COMET = 0.468). Next, we fine-tune (FT) the model on a genuine small parallel dataset drawn from diverse Japanese news and literary corpora, achieving a substantial jump to COMET = 0.589 when using Mistral 7B. Finally, we integrate both backtranslation and fine-tuning, first augmenting the small dataset with BT-generated examples and then adapting via FT, which further boosts performance to COMET = 0.597. These results demonstrate that, even with limited training data, the synergistic use of backtranslation and targeted fine-tuning on Japanese corpora can significantly enhance translation quality, outperforming each technique in isolation. This approach offers a lightweight yet powerful strategy for improving low-resource language pairs.
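To make the recipe concrete, below is a minimal sketch of the BT-plus-parameter-efficient-fine-tuning pipeline described above. It assumes Hugging Face transformers, datasets, and peft, an off-the-shelf Helsinki-NLP/opus-mt-ja-en model for the reverse (Japanese→English) direction, and an instruction-style prompt for Mistral 7B; the paper's exact models, prompt template, data sizes, and hyperparameters are not given in the abstract, so every name and value below is an illustrative assumption rather than the authors' setup.

import torch
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoModelForSeq2SeqLM,
                          AutoTokenizer, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

# Step 1: backtranslation. Translate monolingual Japanese into synthetic English
# with a reverse-direction (Ja->En) model, producing synthetic En->Ja training pairs.
bt_name = "Helsinki-NLP/opus-mt-ja-en"            # assumed reverse-direction model
bt_tok = AutoTokenizer.from_pretrained(bt_name)
bt_model = AutoModelForSeq2SeqLM.from_pretrained(bt_name)

def backtranslate(ja_sentences):
    batch = bt_tok(ja_sentences, return_tensors="pt", padding=True, truncation=True)
    out = bt_model.generate(**batch, max_new_tokens=128)
    en_synth = bt_tok.batch_decode(out, skip_special_tokens=True)
    return [{"en": en, "ja": ja} for en, ja in zip(en_synth, ja_sentences)]

# Tiny illustrative corpora; the real data are small news/literary corpora per the abstract.
mono_ja = ["猫が屋根の上で寝ている。", "明日は雨が降るでしょう。"]
genuine = [{"en": "The cat is sleeping on the roof.", "ja": "猫が屋根の上で寝ている。"}]
train_pairs = genuine + backtranslate(mono_ja)

# Step 2: parameter-efficient fine-tuning (LoRA) of Mistral 7B on genuine + synthetic pairs.
base_name = "mistralai/Mistral-7B-v0.1"           # assumed base checkpoint
tok = AutoTokenizer.from_pretrained(base_name)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base_name, torch_dtype=torch.bfloat16)
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

def to_features(ex):
    # Assumed prompt template; only the LoRA adapter weights are updated during training.
    text = (f"Translate English to Japanese.\nEnglish: {ex['en']}\nJapanese: {ex['ja']}"
            + tok.eos_token)
    return tok(text, truncation=True, max_length=512)

train_ds = Dataset.from_list(train_pairs).map(to_features, remove_columns=["en", "ja"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mistral7b-enja-lora",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=2e-4),
    train_dataset=train_ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),  # causal-LM labels
)
trainer.train()

# Step 3 (not shown): score system outputs with COMET, e.g. via the Unbabel "comet"
# package (download_model / load_from_checkpoint / predict), to reproduce the numbers above.

Only the LoRA adapter parameters are trained, which keeps the memory footprint of a 7B model manageable; the synthetic BT pairs simply extend the small genuine parallel set before fine-tuning, mirroring the "augment with BT, then adapt via FT" order described in the abstract.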
Similar Papers
Improving Translation Quality by Selecting Better Data for LLM Fine-Tuning: A Comparative Analysis
Computation and Language
Makes computer translators much smarter with better word choices.
Dynamic Jointly Batch Selection for Data Efficient Machine Translation Fine-Tuning
Computation and Language
Makes computer translations much better and faster.
Data Augmentation With Back translation for Low Resource languages: A case of English and Luganda
Computation and Language
Improves computer translation for rare languages.