Investigating Test-Time Scaling with Reranking for Machine Translation
By: Shaomu Tan, Ryosuke Mitani, Ritvik Choudhary, and more
Potential Business Impact:
Makes computer translations better by generating many candidate translations and picking the best one.
Scaling model parameters has become the de facto strategy for improving NLP systems, but it comes with substantial computational costs. Test-Time Scaling (TTS) offers an alternative by allocating more computation at inference: generating multiple candidates and selecting the best. While effective in tasks such as mathematical reasoning, TTS has not been systematically explored for machine translation (MT). In this paper, we present the first systematic study of TTS for MT, investigating a simple but practical best-of-N framework on WMT24 benchmarks. Our experiments cover six high-resource language pairs and one low-resource pair, five model sizes (3B-72B), and various TTS compute budgets ($N$ up to 1024). Our results show that a) for high-resource languages, TTS generally improves translation quality according to multiple neural MT evaluation metrics, and our human evaluation confirms these gains; b) augmenting smaller models with a large $N$ can match or surpass larger models at $N{=}1$, though at higher compute cost; and c) under fixed compute budgets, larger models are typically more efficient, and TTS can degrade quality in low-resource cases due to blind spots in the selection metric.
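Concretely, the best-of-N procedure the abstract describes can be sketched in a few lines: sample $N$ candidate translations from the model, score each with a quality-estimation metric, and keep the top-scoring one. The Python sketch below is illustrative, not the paper's exact pipeline; the Hugging Face generation call is standard, while `score_fn` is a hypothetical stand-in for a neural reference-free metric such as CometKiwi.

```python
# Minimal best-of-N reranking sketch for MT (illustrative, not the
# authors' exact setup). Assumes a Hugging Face seq2seq MT model and
# tokenizer are supplied by the caller; `score_fn` is a placeholder.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer


def score_fn(src: str, hyp: str) -> float:
    # Hypothetical quality-estimation scorer. In practice this would be
    # a neural metric run reference-free (e.g., CometKiwi); here a toy
    # length-ratio proxy keeps the sketch self-contained and runnable.
    return -abs(len(hyp) - len(src))


def best_of_n_translate(src: str, model, tokenizer, n: int = 8) -> str:
    """Sample n candidate translations and return the highest-scoring one."""
    inputs = tokenizer(src, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,          # stochastic sampling yields diverse candidates
            top_p=0.9,
            num_return_sequences=n,  # n is the test-time compute budget N
            max_new_tokens=256,
        )
    candidates = tokenizer.batch_decode(outputs, skip_special_tokens=True)
    # Rerank: keep the candidate the metric prefers.
    return max(candidates, key=lambda hyp: score_fn(src, hyp))
```

At $N{=}1$ this reduces to ordinary decoding; raising `n` trades extra inference compute for candidate diversity, which is the budget axis the paper varies up to $N{=}1024$.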
Similar Papers
Test-Time Scaling of Reasoning Models for Machine Translation
Computation and Language
Makes computer translators better at fixing their own mistakes.
The Art of Scaling Test-Time Compute for Large Language Models
Computation and Language
Makes AI think better by changing how it works.
Revisiting Test-Time Scaling: A Survey and a Diversity-Aware Method for Efficient Reasoning
Computation and Language
Makes smart computer programs think better, faster.