Languages Still Left Behind: Toward a Better Multilingual Machine Translation Benchmark

Published: August 28, 2025 | arXiv ID: 2508.20511v1

By: Chihiro Taguchi, Seng Mai, Keita Kurabe, and more

Potential Business Impact:

Improves how machine translation systems are evaluated, leading to more accurate translators for under-served languages.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Multilingual machine translation (MT) benchmarks play a central role in evaluating the capabilities of modern MT systems. Among them, the FLORES+ benchmark is widely used, offering English-to-many translation data for over 200 languages, curated with strict quality control protocols. However, we study data in four languages (Asante Twi, Japanese, Jinghpaw, and South Azerbaijani) and uncover critical shortcomings in the benchmark's suitability for truly multilingual evaluation. Human assessments reveal that many translations fall below the claimed 90% quality standard, and the annotators report that source sentences are often too domain-specific and culturally biased toward the English-speaking world. We further demonstrate that simple heuristics, such as copying named entities, can yield non-trivial BLEU scores, suggesting vulnerabilities in the evaluation protocol. Notably, we show that MT models trained on high-quality, naturalistic data perform poorly on FLORES+ while achieving significant gains on our domain-relevant evaluation set. Based on these findings, we advocate for multilingual MT benchmarks that use domain-general and culturally neutral source texts and rely less on named entities, in order to better reflect real-world translation challenges.
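To illustrate the kind of vulnerability the abstract describes, here is a minimal sketch of a "copy the named entities" baseline scored with BLEU using the sacrebleu library. This is not the paper's exact protocol: the named-entity heuristic (capitalized source tokens) and the example sentence pair are placeholders chosen for illustration only.

```python
# Illustrative sketch (not the paper's protocol): score a trivial baseline
# that simply copies named entities from the English source, using sacrebleu.
from sacrebleu.metrics import BLEU

def copy_named_entities(source: str) -> str:
    """Crude stand-in for NE extraction: keep only capitalized source tokens."""
    return " ".join(tok for tok in source.split() if tok[:1].isupper())

# Hypothetical sentence pair; a real test would load FLORES+ (or another benchmark).
sources = ["The Kyoto Protocol was adopted in Kyoto, Japan in 1997."]
references = ["Le protocole de Kyoto a été adopté à Kyoto, au Japon, en 1997."]

hypotheses = [copy_named_entities(src) for src in sources]

bleu = BLEU()
score = bleu.corpus_score(hypotheses, [references])
# Named entities are often written identically across languages, so even this
# degenerate "translation" can collect n-gram matches and a non-zero BLEU score.
print(score)
```

Because proper names tend to survive translation unchanged, a benchmark whose sentences are dense with named entities can reward such degenerate outputs, which is the evaluation weakness the authors flag.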

Country of Origin
🇺🇸 United States

Page Count
13 pages

Category
Computer Science:
Computation and Language