LAG-MMLU: Benchmarking Frontier LLM Understanding in Latvian and Giriama

Published: March 14, 2025 | arXiv ID: 2503.11911v2

By: Naome A. Etori, Kevin Lu, Randu Karisa, and more

Potential Business Impact:

Tests how well AI models understand low-resource languages such as Latvian and Giriama.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As large language models (LLMs) rapidly advance, evaluating their performance is critical. LLMs are trained on multilingual data, but their reasoning abilities are mainly evaluated using English datasets. Robust evaluation frameworks built on high-quality non-English datasets are therefore needed, especially for low-resource languages (LRLs). This study evaluates eight state-of-the-art (SOTA) LLMs on Latvian and Giriama using a Massive Multitask Language Understanding (MMLU) subset curated with native speakers for linguistic and cultural relevance; Giriama is benchmarked for the first time. Our evaluation shows that OpenAI's o1 model outperforms the others across all languages, scoring 92.8% in English, 88.8% in Latvian, and 70.8% in Giriama on 0-shot tasks, while Mistral-large (35.6%) and Llama-70B IT (41%) perform weakly on both Latvian and Giriama. Our results underscore the need for localized benchmarks and human evaluations in advancing culturally contextualized AI.
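For readers unfamiliar with the setup, the sketch below shows how 0-shot MMLU-style multiple-choice scoring typically works: each question and its options are presented with no worked examples, and accuracy is the fraction of items where the model's answer letter matches the gold label. This is a minimal illustration, not the paper's harness; `query_model` is a hypothetical stand-in for whatever LLM API is used, and the prompt wording is an assumption.

```python
# Minimal sketch of 0-shot MMLU-style multiple-choice scoring.
# Assumptions: four options labeled A-D, gold answers given as letters,
# and a caller-supplied `query_model(prompt) -> str` function.

from dataclasses import dataclass


@dataclass
class Item:
    question: str
    choices: list[str]  # e.g. four options, shown as A-D
    answer: str         # gold letter, e.g. "B"


def build_prompt(item: Item) -> str:
    # 0-shot: only the question and options, no worked examples.
    letters = "ABCD"
    options = "\n".join(f"{letters[i]}. {c}" for i, c in enumerate(item.choices))
    return (f"{item.question}\n{options}\n"
            "Answer with the letter of the correct option.")


def score(items: list[Item], query_model) -> float:
    # Accuracy = fraction of items where the model's first output letter
    # matches the gold answer.
    correct = 0
    for item in items:
        reply = query_model(build_prompt(item)).strip().upper()
        if reply[:1] == item.answer:
            correct += 1
    return correct / len(items)


if __name__ == "__main__":
    demo = [Item("Riga is the capital of which country?",
                 ["Estonia", "Latvia", "Lithuania", "Kenya"], "B")]
    # A dummy model that always answers "B", just to show the loop runs.
    print(score(demo, lambda prompt: "B"))  # -> 1.0
```

In practice, a harness like this is run once per model and language, with the question set translated and culturally adapted by native speakers, as the paper describes for its Latvian and Giriama subsets.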

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language