Score: 1

Test Set Quality in Multilingual LLM Evaluation

Published: August 4, 2025 | arXiv ID: 2508.02635v1

By: Kranti Chalamalasetti, Gabriel Bernier-Colborne, Yvan Gauthier, and more

Potential Business Impact:

Improves the quality of multilingual language-model test sets, making evaluation results more reliable.

Several multilingual benchmark datasets have recently been developed in a semi-automatic manner to measure progress and understand the state of the art in the multilingual capabilities of Large Language Models. However, little attention has been paid to the quality of these datasets themselves, despite previous work identifying errors even in fully human-annotated test sets. In this paper, we manually analyze recent multilingual evaluation sets in two languages, French and Telugu, identifying several errors in the process. We compare the performance of several LLMs on the original and revised versions of the datasets and find large differences (almost 10% in some cases) in both languages. Based on these results, we argue that test sets should not be considered immutable and should be revisited, checked for correctness, and potentially versioned. We end with recommendations for both dataset creators and consumers on addressing dataset quality issues.
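To make the comparison concrete, here is a minimal sketch (not the authors' code) of how one might measure the accuracy gap between an original and a revised multiple-choice test set; the file paths, JSONL field names, and the predict() callable are hypothetical placeholders.

```python
# Sketch: compare a model's accuracy on original vs. revised test-set versions.
# Assumes JSONL files with "question", "choices", and "answer" fields (hypothetical).
import json

def load(path):
    """Load one example per line from a JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

def accuracy(predict, examples):
    """Fraction of examples where the model's prediction matches the gold answer."""
    correct = sum(
        1 for ex in examples
        if predict(ex["question"], ex["choices"]) == ex["answer"]
    )
    return correct / len(examples)

def compare(predict, original_path, revised_path):
    """Report accuracy on both dataset versions and the resulting delta."""
    orig = accuracy(predict, load(original_path))   # e.g. original_fr.jsonl
    rev = accuracy(predict, load(revised_path))     # e.g. revised_fr.jsonl
    print(f"original: {orig:.1%}  revised: {rev:.1%}  delta: {rev - orig:+.1%}")
```

Under this setup, a delta of several percentage points, as the paper reports for French and Telugu, would indicate that test-set errors materially distort the measured ranking of models.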

Country of Origin
🇩🇪 Germany

Page Count
10 pages

Category
Computer Science:
Computation and Language