LLMs Are Not Scorers: Rethinking MT Evaluation with Generation-Based Methods
By: Hyang Cui
Potential Business Impact:
Helps computers judge translation quality more accurately, so better translations can be selected and shipped.
Recent studies have applied large language models (LLMs) to machine translation quality estimation (MTQE) by prompting models to assign numeric scores. However, these direct scoring methods often show weak segment-level correlation with human judgments. In this paper, we propose a generation-based evaluation paradigm that leverages decoder-only LLMs to produce high-quality references, followed by semantic similarity scoring using sentence embeddings. We conduct the most extensive evaluation to date in MTQE, covering 8 LLMs and 8 language pairs. Empirical results show that our method outperforms both intra-LLM direct scoring baselines and external non-LLM reference-free metrics from MTME. These findings demonstrate the strength of generation-based evaluation and support a shift toward hybrid approaches that combine fluent generation with accurate semantic assessment.
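To make the two-step pipeline concrete, here is a minimal sketch of generation-based evaluation as the abstract describes it: an LLM first produces a reference translation from the source, then the MT hypothesis is scored by its embedding similarity to that reference. The specific model names, prompt wording, and cosine-similarity scorer below are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of generation-based MT quality estimation.
# Assumptions (not from the paper): the OpenAI chat API as the decoder-only LLM,
# the sentence-transformers library with the "all-MiniLM-L6-v2" embedding model,
# and cosine similarity as the semantic scorer.

from openai import OpenAI
from sentence_transformers import SentenceTransformer, util

client = OpenAI()  # requires OPENAI_API_KEY in the environment
embedder = SentenceTransformer("all-MiniLM-L6-v2")


def generate_reference(source: str, src_lang: str, tgt_lang: str) -> str:
    """Step 1: prompt a decoder-only LLM to produce a high-quality reference translation."""
    prompt = (
        f"Translate the following {src_lang} sentence into {tgt_lang}. "
        f"Respond with only the translation.\n\n{source}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any strong decoder-only LLM would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()


def quality_score(source: str, hypothesis: str, src_lang: str, tgt_lang: str) -> float:
    """Step 2: score the MT hypothesis by semantic similarity to the generated reference."""
    reference = generate_reference(source, src_lang, tgt_lang)
    embeddings = embedder.encode([reference, hypothesis], convert_to_tensor=True)
    # Higher cosine similarity = hypothesis is semantically closer to the LLM reference.
    return util.cos_sim(embeddings[0], embeddings[1]).item()


if __name__ == "__main__":
    score = quality_score(
        source="Der schnelle braune Fuchs springt über den faulen Hund.",
        hypothesis="The quick brown fox jumps over the lazy dog.",
        src_lang="German",
        tgt_lang="English",
    )
    print(f"Segment-level quality estimate: {score:.3f}")
```

In practice, segment-level scores like this are then correlated against human judgments to compare the generation-based paradigm with direct LLM scoring.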
Similar Papers
When LLMs Struggle: Reference-less Translation Evaluation for Low-resource Languages
Computation and Language
Helps computers translate languages better, even rare ones.
Déjà Vu: Multilingual LLM Evaluation through the Lens of Machine Translation Evaluation
Computation and Language
Tests AI language skills better for smarter tools.
MTQ-Eval: Multilingual Text Quality Evaluation for Language Models
Computation and Language
Helps computers judge good writing in many languages.