LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama
By: Naome A. Etori, Kevin Lu, Randu Karisa, and more
Potential Business Impact:
Tests how well AI understands low-resource languages.
As large language models (LLMs) rapidly advance, evaluating their performance is critical. LLMs are trained on multilingual data, but their reasoning abilities are mainly evaluated using English datasets. Robust evaluation frameworks are therefore needed that use high-quality non-English datasets, especially for low-resource languages (LRLs). This study evaluates eight state-of-the-art (SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language Understanding (MMLU) subset curated with native speakers for linguistic and cultural relevance; Giriama is benchmarked for the first time. Our evaluation shows that OpenAI's o1 model outperforms the others across all languages, scoring 92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks. Mistral-large (35.6%) and Llama-70B IT (41%) perform weakly on both Latvian and Giriama. Our results underscore the need for localized benchmarks and human evaluations in advancing the cultural contextualization of AI.
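To make the 0-shot setup concrete, below is a minimal illustrative sketch in Python of how an MMLU-style multiple-choice item can be scored without any in-context examples. The `ask_model` callable, the item fields, and the prompt wording are assumptions for illustration, not the paper's actual evaluation harness.

```python
# Minimal sketch of a 0-shot MMLU-style evaluation loop (illustrative, not the authors' code).
# `ask_model` is a hypothetical stand-in for whatever API call queries the LLM under test.
from typing import Callable, Dict, List

CHOICE_LABELS = ["A", "B", "C", "D"]

def format_prompt(item: Dict) -> str:
    """Render one multiple-choice item as a 0-shot prompt (no worked examples)."""
    lines = [item["question"]]
    lines += [f"{label}. {choice}" for label, choice in zip(CHOICE_LABELS, item["choices"])]
    lines.append("Answer with a single letter (A, B, C, or D).")
    return "\n".join(lines)

def evaluate(items: List[Dict], ask_model: Callable[[str], str]) -> float:
    """Return 0-shot accuracy: fraction of items where the model's letter matches the key."""
    correct = 0
    for item in items:
        reply = ask_model(format_prompt(item)).strip().upper()
        predicted = next((ch for ch in reply if ch in CHOICE_LABELS), None)
        if predicted == item["answer"]:  # answer key is a letter, e.g. "B"
            correct += 1
    return correct / len(items)

if __name__ == "__main__":
    # Toy item standing in for a translated (e.g. Latvian or Giriama) benchmark entry.
    sample = [{"question": "2 + 2 = ?", "choices": ["3", "4", "5", "6"], "answer": "B"}]
    print(evaluate(sample, ask_model=lambda prompt: "B"))  # prints 1.0
```

Reported accuracies like o1's 92.8% in English are simply this fraction computed over the full curated subset for each language.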
Similar Papers
SinhalaMMLU: A Comprehensive Benchmark for Evaluating Multitask Language Understanding in Sinhala
Computation and Language
Evaluates how well computers understand Sinhala.
The 2025 Planning Performance of Frontier Large Language Models
Artificial Intelligence
Measures how well frontier AI models can plan.
From Phonemes to Meaning: Evaluating Large Language Models on Tamil
Computation and Language
Tests computers on Tamil language understanding.