Test Set Quality in Multilingual LLM Evaluation
By: Kranti Chalamalasetti, Gabriel Bernier-Colborne, Yvan Gauthier and more
Potential Business Impact:
Finds and fixes errors in multilingual AI test sets, making evaluation results more reliable.
Several multilingual benchmark datasets have recently been developed in a semi-automatic manner to measure progress and understand the state of the art in the multilingual capabilities of Large Language Models. However, little attention has been paid to the quality of these datasets themselves, despite previous work identifying errors even in fully human-annotated test sets. In this paper, we manually analyze recent multilingual evaluation sets in two languages, French and Telugu, identifying several errors in the process. We compare the performance of several LLMs on the original and revised versions of the datasets and find large differences (almost 10% in some cases) in both languages. Based on these results, we argue that test sets should not be considered immutable and should be revisited, checked for correctness, and potentially versioned. We end with recommendations for both dataset creators and consumers on addressing dataset quality issues.
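The comparison described in the abstract amounts to scoring each model on both versions of a test set and reporting the delta. Below is a minimal sketch of that idea, not the authors' code: `load_examples`, the JSONL field names, and `model_answer` are all hypothetical placeholders standing in for whatever dataset format and inference function an evaluation actually uses.

```python
# Sketch: compare a model's accuracy on the original vs. a revised test set.
# Assumes each file is JSONL with "question" and "answer" fields (hypothetical format).
import json
from typing import Callable, Iterable


def load_examples(path: str) -> list[dict]:
    """Read one JSON object per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def accuracy(examples: Iterable[dict], model_answer: Callable[[str], str]) -> float:
    """Fraction of examples where the model's answer matches the gold answer exactly."""
    examples = list(examples)
    correct = sum(
        model_answer(ex["question"]).strip() == ex["answer"].strip()
        for ex in examples
    )
    return correct / len(examples)


def compare_versions(original_path: str, revised_path: str,
                     model_answer: Callable[[str], str]) -> None:
    """Report accuracy on both dataset versions and the difference between them."""
    acc_original = accuracy(load_examples(original_path), model_answer)
    acc_revised = accuracy(load_examples(revised_path), model_answer)
    print(f"original: {acc_original:.3f}  revised: {acc_revised:.3f}  "
          f"delta: {acc_revised - acc_original:+.3f}")
```

Running `compare_versions` once per model and language is enough to surface the kind of gap the paper reports; a delta approaching 10% suggests the benchmark errors, not the models, are driving part of the measured difference.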
Similar Papers
Evaluating the Quality of Benchmark Datasets for Low-Resource Languages: A Case Study on Turkish
Computation and Language
Examines the quality of benchmark datasets for low-resource languages, with Turkish as a case study.
The Bitter Lesson Learned from 2,000+ Multilingual Benchmarks
Computation and Language
Draws lessons from an analysis of over 2,000 multilingual benchmarks for fairer evaluation across languages.
MTQ-Eval: Multilingual Text Quality Evaluation for Language Models
Computation and Language
Helps computers judge good writing in many languages.