Score: 2

Evaluating Arabic Large Language Models: A Survey of Benchmarks, Methods, and Gaps

Published: October 15, 2025 | arXiv ID: 2510.13430v2

By: Ahmed Alzubaidi, Shaikha Alsuwaidi, Basma El Amel Boussaha, and more

Potential Business Impact:

Helps developers measure and improve how well language models understand Arabic.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This survey provides the first systematic review of Arabic LLM benchmarks, analyzing 40+ evaluation benchmarks spanning NLP tasks, knowledge domains, cultural understanding, and specialized capabilities. We propose a taxonomy that organizes benchmarks into four categories: Knowledge, NLP Tasks, Culture and Dialects, and Target-Specific evaluations. Our analysis reveals significant progress in benchmark diversity while identifying critical gaps: limited temporal evaluation, insufficient multi-turn dialogue assessment, and cultural misalignment in translated datasets. We examine three primary approaches to benchmark construction: native collection, translation, and synthetic generation, discussing their trade-offs in authenticity, scale, and cost. This work serves as a comprehensive reference for Arabic NLP researchers, providing insights into benchmark methodologies, reproducibility standards, and evaluation metrics, and offering recommendations for future development.


Page Count
16 pages

Category
Computer Science:
Computation and Language