Same evaluation, more tokens: On the effect of input length for machine translation evaluation using Large Language Models
By: Tobias Domhan, Dawei Zhu
Potential Business Impact:
Helps computers judge long translations better.
Accurately evaluating machine-translated text remains a long-standing challenge, particularly for long documents. Recent work has shown that large language models (LLMs) can serve as reliable and interpretable sentence-level translation evaluators via MQM error span annotations. With modern LLMs supporting larger context windows, a natural question arises: can we feed entire document translations into an LLM for quality assessment? Ideally, evaluation should be invariant to text length, producing consistent error spans regardless of input granularity. However, our analysis shows that text length significantly impacts evaluation: longer texts lead to fewer error spans and reduced system ranking accuracy. To address this limitation, we evaluate several strategies, including granularity-aligned prompting, Focus Sentence Prompting (FSP), and a fine-tuning approach to better align LLMs with the evaluation task. The latter two methods largely mitigate this length bias, making LLMs more reliable for long-form translation evaluation.
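To make the idea of Focus Sentence Prompting (FSP) concrete, here is a minimal sketch of how such a strategy might be wired up: the full document translation is provided as context, but the model is asked to annotate MQM error spans for only one focus sentence at a time. The prompt wording, the helper function names, and the generic `llm` callable are illustrative assumptions, not the authors' exact prompts or models.

```python
# Hypothetical sketch of Focus Sentence Prompting (FSP) for MQM-style evaluation.
# Prompt text and helper names are assumptions; the paper's exact setup may differ.

def build_fsp_prompt(source_doc: str, target_doc: str, focus_sentence: str) -> str:
    """Show the whole document as context, but restrict annotation to one sentence."""
    return (
        "You are an MQM annotator for machine translation.\n"
        f"Source document:\n{source_doc}\n\n"
        f"Translated document:\n{target_doc}\n\n"
        "Annotate translation error spans (category and severity) ONLY in the "
        f"following focus sentence, using the rest of the document as context:\n"
        f"{focus_sentence}\n"
    )


def evaluate_document(source_doc: str, target_sentences: list[str], llm) -> list[str]:
    """Run FSP sentence by sentence, so the number of reported error spans
    does not shrink as the document grows."""
    target_doc = " ".join(target_sentences)
    annotations = []
    for sentence in target_sentences:
        prompt = build_fsp_prompt(source_doc, target_doc, sentence)
        # llm is any callable mapping a prompt string to the model's text output.
        annotations.append(llm(prompt))
    return annotations
```

The design intent is that each call keeps the annotation granularity fixed at the sentence level, regardless of how long the surrounding document is, which is what counters the length bias described in the abstract.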
Similar Papers
Context Length Alone Hurts LLM Performance Despite Perfect Retrieval
Computation and Language
Shows that long inputs hurt computers even when they find the right information.
Multilingual Contextualization of Large Language Models for Document-Level Machine Translation
Computation and Language
Translates whole books, not just sentences.
Extending Automatic Machine Translation Evaluation to Book-Length Documents
Computation and Language
Tests if computers translate whole books well.