LaoBench: A Large-Scale Multidimensional Lao Benchmark for Large Language Models
By: Jian Gao, Richeng Xuan, Zhaolu Kang, and more
Potential Business Impact:
Tests AI's understanding of the Lao language.
The rapid advancement of large language models (LLMs) has not been matched by evaluation resources for low-resource languages, especially Southeast Asian languages such as Lao. To fill this gap, we introduce LaoBench, the first large-scale, high-quality, and multidimensional benchmark dataset dedicated to assessing LLMs' comprehensive language understanding and reasoning abilities in Lao. LaoBench comprises over 17,000 carefully curated samples spanning three core dimensions: knowledge application, K-12 foundational education, and bilingual translation among Lao, Chinese, and English. The dataset is divided into open-source and closed-source subsets; the closed-source portion enables black-box evaluation on an official platform to ensure fairness and data security. Our data construction pipeline combines expert human curation with automated agent-assisted verification, ensuring linguistic accuracy, cultural relevance, and educational value. Benchmarking multiple state-of-the-art LLMs on LaoBench reveals that current models still face significant challenges in mastering Lao across diverse tasks. We hope LaoBench will catalyze further research and development of AI technologies for underrepresented Southeast Asian languages.
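To make the black-box evaluation setup concrete, here is a minimal sketch of how a benchmark like LaoBench's open-source split might be scored: only prompts go to the model and only its answers come back, with no access to gold labels at inference time. The JSONL file path, the field names `question`, `choices`, and `answer`, and the `query_model` callable are all hypothetical; the paper does not specify LaoBench's actual schema or the official platform's API.

```python
import json
from typing import Callable


def load_samples(path: str) -> list[dict]:
    """Load benchmark samples from a JSONL file (hypothetical schema)."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]


def evaluate(samples: list[dict], query_model: Callable[[str], str]) -> float:
    """Black-box accuracy: the model sees only the prompt, never the label."""
    correct = 0
    for s in samples:
        # Hypothetical fields: "question" (str), "choices" (list of str),
        # "answer" (a letter such as "A").
        prompt = s["question"] + "\n" + "\n".join(
            f"{label}. {text}" for label, text in zip("ABCD", s["choices"])
        )
        prediction = query_model(prompt).strip().upper()[:1]
        correct += prediction == s["answer"]
    return correct / len(samples)
```

Under this setup, the closed-source subset would run the same loop server-side on the official platform, so test labels never leave the evaluation host.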
Similar Papers
CogBench: A Large Language Model Benchmark for Multilingual Speech-Based Cognitive Impairment Assessment
Artificial Intelligence
Helps computers find memory problems from talking.
OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education
Computation and Language
Tests how well AI learns and thinks like students.
AraLingBench: A Human-Annotated Benchmark for Evaluating Arabic Linguistic Capabilities of Large Language Models
Computation and Language
Tests if computer language skills are real.