Score: 2

Mind the Gap... or Not? How Translation Errors and Evaluation Details Skew Multilingual Results

Published: November 7, 2025 | arXiv ID: 2511.05162v1

By: Jan-Thorsten Peter, David Vilar, Tobias Domhan, and more

BigTech Affiliations: Google

Potential Business Impact:

Corrects a flawed multilingual math benchmark, showing that the apparent cross-language performance gap of LLMs is largely an evaluation artifact.

Business Areas:
Language Learning, Education

Most current large language models (LLMs) support a wide variety of languages in addition to English, including high-resource languages (e.g. German, Chinese, French) as well as low-resource ones (e.g. Swahili, Telugu). They have also shown impressive capabilities in different domains, such as coding, science, and math. In this short paper, taking math as an example domain, we study the performance of different LLMs across languages. Experimental results show a non-negligible and consistent gap in the performance of the models across languages. Interestingly, and somewhat against expectations, the gap exists for both high- and low-resource languages. We hope that these results influence further research into cross-lingual capability generalization for next-generation LLMs. If it weren't for the fact that they are false! By analyzing one of the standard multilingual math benchmarks (MGSM), we determine that several translation errors are present in the data. Furthermore, the lack of standardized answer extraction from LLM outputs also distorts the final results. We propose a method for automatic quality assurance to address the first issue at scale, and give recommendations to address the second one. Combining these two approaches, we show that the aforementioned language gap mostly disappears, leading to completely different conclusions from our research. We additionally release the corrected dataset to the community.
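The answer-extraction point is easy to underestimate: math benchmarks like MGSM are scored by exact match on a final number, so a parser that mishandles locale conventions (decimal commas, thousands separators) depresses scores in some languages even when the model's reasoning is correct. Below is a minimal, hypothetical sketch of a locale-aware extraction routine; it is not the paper's actual method, and all names and heuristics in it are illustrative assumptions.

```python
import re

# Hypothetical sketch of locale-aware answer extraction for exact-match
# math benchmarks (MGSM-style scoring). Illustrative only, not the
# paper's proposed method.

# Either a number with explicit thousands groups ("1,234.5", "1.234,5")
# or a plain number with an optional decimal part ("72", "3.14").
_NUMBER = re.compile(
    r"-?\d{1,3}(?:[.,]\d{3})+(?:[.,]\d+)?|-?\d+(?:[.,]\d+)?"
)

def extract_final_number(output: str) -> float | None:
    """Return the last number in an LLM's free-form answer, or None."""
    matches = _NUMBER.findall(output)
    if not matches:
        return None
    raw = matches[-1]
    if "," in raw and "." in raw:
        # Both marks present: the rightmost one is the decimal separator.
        if raw.rfind(",") > raw.rfind("."):
            raw = raw.replace(".", "").replace(",", ".")
        else:
            raw = raw.replace(",", "")
    elif "," in raw:
        # Lone comma is ambiguous: "1,234" reads as a thousands group,
        # while "1,5" is a German/French-style decimal.
        head, _, tail = raw.rpartition(",")
        raw = raw.replace(",", "") if len(tail) == 3 else head + "." + tail
    elif raw.count(".") > 1:
        # "1.234.567": multiple dots can only be thousands separators.
        raw = raw.replace(".", "")
    try:
        return float(raw)
    except ValueError:
        return None

# Example: all three outputs should score as the same answer, 1234.5.
for text in ["The answer is 1,234.5.", "Die Antwort ist 1.234,5.", "1234.5"]:
    assert extract_final_number(text) == 1234.5
```

Applying one shared extractor like this across all languages removes a per-language confound from the scoring pipeline, which is the kind of standardization the abstract's recommendation points toward.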

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language