Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost
By: Runzhe Zhan, Zhihong Huang, Xinyi Yang and more
Potential Business Impact:
Makes computers better at judging the quality of translations.
Recent advancements in large reasoning models (LRMs) have introduced an intermediate "thinking" process prior to generating final answers, improving their reasoning capabilities on complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to "overthink" simpler instances, and struggle with scoring mechanisms, which leads to score overestimation. To address these issues, we propose to calibrate LRM thinking by training on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmarks demonstrate that this approach reduces thinking budgets by roughly 35x while concurrently improving evaluation performance across LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation point improvement). These findings highlight the potential of efficiently calibrated LRMs to advance fine-grained automatic MT evaluation.
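To make the LRM-as-a-judge setup concrete, below is a minimal sketch of how a reasoning model might be prompted to score a translation and how the numeric score could be parsed from its output. The prompt wording, the 0-100 scoring scale, and the `generate` function standing in for a model call are all illustrative assumptions, not the authors' actual protocol.

```python
import re

def build_judge_prompt(source: str, translation: str) -> str:
    """Assemble an evaluation prompt asking the model to reason, then score.

    The instructions and scale here are hypothetical; the paper's calibrated
    models are trained on synthetic human-like thinking trajectories instead
    of relying on ad-hoc prompting like this.
    """
    return (
        "You are a translation quality evaluator.\n"
        f"Source: {source}\n"
        f"Translation: {translation}\n"
        "Briefly reason about accuracy and fluency, then output a line of the "
        "form 'Score: X' where X is an integer from 0 (unusable) to 100 (perfect)."
    )

def parse_score(model_output: str) -> int | None:
    """Extract the final 'Score: X' value from the model's full response,
    including any preceding thinking text."""
    matches = re.findall(r"Score:\s*(\d{1,3})", model_output)
    if not matches:
        return None
    score = int(matches[-1])  # take the last reported score
    return max(0, min(100, score))

def judge_translation(source: str, translation: str, generate) -> int | None:
    """Run one LRM-as-a-judge evaluation.

    `generate` is a placeholder callable (prompt -> text) for whatever LRM
    backend is used; it is an assumption of this sketch.
    """
    prompt = build_judge_prompt(source, translation)
    return parse_score(generate(prompt))
```

A metric built this way would be compared against human judgments (e.g., via segment-level correlation on WMT Metrics data); the paper's finding is that uncalibrated reasoning traces inflate such scores, whereas training on concise human-like trajectories both shortens the thinking and improves correlation.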
Similar Papers
Reasoning Models Reason Well, Until They Don't
Artificial Intelligence
Makes smart computers better at solving hard problems.
JudgeLRM: Large Reasoning Models as a Judge
Computation and Language
Makes AI judges better at thinking through hard problems.
LLM Reasoning for Machine Translation: Synthetic Data Generation over Thinking Tokens
Computation and Language
Makes computer translators better by showing them how.