MELAC: Massive Evaluation of Large Language Models with Alignment of Culture in Persian Language
By: Farhan Farsi, Farnaz Aghababaloo, Shahriar Shariati Motlagh, and more
Potential Business Impact:
Helps computers understand the Persian language and culture better.
As large language models (LLMs) become increasingly embedded in our daily lives, evaluating their quality and reliability across diverse contexts has become essential. While comprehensive benchmarks exist for assessing LLM performance in English, there remains a significant gap in evaluation resources for other languages. Moreover, because most LLMs are trained primarily on data rooted in European and American cultures, they often lack familiarity with non-Western cultural contexts. To address this limitation, our study focuses on the Persian language and Iranian culture. We introduce 19 new evaluation datasets specifically designed to assess LLMs on topics such as Iranian law, Persian grammar, Persian idioms, and university entrance exams. Using these datasets, we benchmarked 41 prominent LLMs, aiming to bridge the existing cultural and linguistic evaluation gap in the field.
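The abstract does not describe the evaluation harness itself, but benchmarks of this kind (law, grammar, idioms, entrance exams) are typically scored as multiple-choice accuracy. Below is a minimal sketch of such a scoring loop, assuming a hypothetical `query_model()` wrapper and a simple question/choices/answer schema; the actual MELAC dataset formats and prompts may differ.

```python
# Minimal multiple-choice evaluation sketch. query_model() is a
# hypothetical placeholder, not the paper's harness or a real API.
from dataclasses import dataclass


@dataclass
class MCQItem:
    question: str        # e.g., a Persian-grammar or Iranian-law question
    choices: list[str]   # candidate answers
    answer_index: int    # index of the correct choice


def query_model(prompt: str) -> str:
    """Placeholder for an LLM call; swap in a real client here."""
    raise NotImplementedError


def evaluate(items: list[MCQItem]) -> float:
    """Return the model's accuracy over a list of multiple-choice items."""
    correct = 0
    for item in items:
        labels = [chr(ord("A") + i) for i in range(len(item.choices))]
        options = "\n".join(f"{l}) {c}" for l, c in zip(labels, item.choices))
        prompt = (
            f"{item.question}\n{options}\n"
            "Answer with the letter of the correct choice only."
        )
        reply = query_model(prompt).strip().upper()
        if reply[:1] == labels[item.answer_index]:
            correct += 1
    return correct / len(items) if items else 0.0
```

Running this loop over each of the 19 datasets for each of the 41 models would yield the kind of per-topic accuracy comparison the paper reports.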
Similar Papers
ELAB: Extensive LLM Alignment Benchmark in Persian Language
Computation and Language
Helps AI understand Persian culture and behave safely.
Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps
Computation and Language
Helps computers understand Arabic better.