BnMMLU: Measuring Massive Multitask Language Understanding in Bengali
By: Saman Sarker Joy
Potential Business Impact:
Tests how well computers understand Bengali.
The Massive Multitask Language Understanding (MMLU) benchmark has been widely used to evaluate language models across various domains. However, existing MMLU datasets primarily focus on high-resource languages such as English, leaving low-resource languages like Bengali underrepresented. In this paper, we introduce BnMMLU, a benchmark for evaluating the multitask language understanding capabilities of language models in Bengali. The dataset spans 23 domains, including science, humanities, mathematics, and general knowledge, and consists of 138,949 question-option pairs structured in a multiple-choice format to assess the factual knowledge, application-based problem-solving, and reasoning abilities of language models. We benchmark several proprietary and open-source large language models (LLMs) on the BnMMLU test set. Additionally, we annotate the test set with three cognitive categories (factual knowledge, procedural application, and reasoning) to gain deeper insights into model strengths and weaknesses across various cognitive tasks. The results reveal significant performance gaps, highlighting the need for improved pre-training and fine-tuning strategies tailored to Bengali data. We release the dataset and benchmark results to facilitate further research in this area.
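To make the evaluation setup concrete, below is a minimal sketch of how an MMLU-style multiple-choice benchmark such as BnMMLU is typically scored: each question is formatted as a prompt with lettered options, the model's predicted letter is compared to the gold answer, and accuracy is reported. The file name, field names, and prompt template are illustrative assumptions, not the authors' released format or harness.

```python
# Minimal sketch of MMLU-style multiple-choice scoring.
# Assumptions (hypothetical, not from the paper): a local JSONL file
# "bnmmlu_test.jsonl" where each line has "question", "options" (list of 4),
# and "answer" (a letter A-D).
import json

def build_prompt(item):
    """Format one question-option pair as a zero-shot multiple-choice prompt."""
    options = "\n".join(
        f"{label}. {text}" for label, text in zip("ABCD", item["options"])
    )
    return f"{item['question']}\n{options}\nAnswer:"

def evaluate(items, generate):
    """Score a model by exact match on the predicted option letter.

    `generate` is any callable mapping a prompt string to the model's
    raw answer string (e.g., a thin wrapper around an LLM API).
    """
    correct = 0
    for item in items:
        prediction = generate(build_prompt(item)).strip().upper()[:1]
        correct += prediction == item["answer"]
    return correct / len(items)

if __name__ == "__main__":
    with open("bnmmlu_test.jsonl", encoding="utf-8") as f:
        items = [json.loads(line) for line in f]
    # Trivial baseline that always answers "A", for demonstration only.
    print(f"Accuracy: {evaluate(items, lambda prompt: 'A'):.3f}")
```

The same loop extends naturally to per-category reporting: grouping items by the paper's three cognitive annotations (factual knowledge, procedural application, reasoning) before averaging yields the category-level breakdown the authors use to analyze model strengths and weaknesses.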
Similar Papers
SinhalaMMLU: A Comprehensive Benchmark for Evaluating Multitask Language Understanding in Sinhala
Computation and Language
Helps computers understand a new language better.
Measuring Hong Kong Massive Multi-Task Language Understanding
Computation and Language
Helps AI understand Hong Kong's language and culture.
IndicMMLU-Pro: Benchmarking Indic Large Language Models on Multi-Task Language Understanding
Computation and Language
Helps computers understand Indian languages better.