UrBLiMP: A Benchmark for Evaluating the Linguistic Competence of Large Language Models in Urdu
By: Farah Adeeba, Brian Dillon, Hassan Sajjad, and more
Potential Business Impact:
Tests how well computers understand Urdu grammar.
Multilingual Large Language Models (LLMs) have shown remarkable performance across many languages; however, their training data typically includes far less material for low-resource languages such as Urdu than for high-resource languages like English. To assess the linguistic knowledge of LLMs in Urdu, we present the Urdu Benchmark of Linguistic Minimal Pairs (UrBLiMP), i.e., pairs of minimally different sentences that contrast in grammatical acceptability. UrBLiMP comprises 5,696 minimal pairs targeting ten core syntactic phenomena, carefully curated using the Urdu Treebank and diverse Urdu text corpora. A human evaluation of UrBLiMP annotations yielded 96.10% inter-annotator agreement, confirming the reliability of the dataset. We evaluate twenty multilingual LLMs on UrBLiMP, revealing significant variation in performance across linguistic phenomena. While LLaMA-3-70B achieves the highest average accuracy (94.73%), its performance is statistically comparable to that of other top models such as Gemma-3-27B-PT. These findings highlight both the potential and the limitations of current multilingual LLMs in capturing fine-grained syntactic knowledge in low-resource languages.
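The standard way BLiMP-style benchmarks score a model is to count a pair as correct when the model assigns a higher probability to the grammatical sentence than to its minimally different ungrammatical counterpart. The sketch below illustrates that scoring step only; the function name, the example log-probabilities, and the pairs are hypothetical placeholders, not UrBLiMP data or the paper's exact protocol.

```python
# Minimal-pair evaluation sketch: a model is "correct" on a pair when it
# assigns higher log-probability to the grammatical sentence. In practice
# the log-probabilities would come from scoring each sentence with an LLM;
# here they are illustrative made-up numbers.

def minimal_pair_accuracy(scored_pairs):
    """scored_pairs: list of (logprob_grammatical, logprob_ungrammatical) tuples."""
    correct = sum(1 for good, bad in scored_pairs if good > bad)
    return correct / len(scored_pairs)

# Hypothetical log-probabilities for three minimal pairs:
scored = [(-12.3, -15.8), (-9.1, -9.5), (-20.0, -18.2)]
print(minimal_pair_accuracy(scored))  # 2 of 3 pairs scored correctly
```

Averaging this per-pair accuracy within each of the ten syntactic phenomena is what allows the per-phenomenon comparison the abstract describes.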
Similar Papers
TurBLiMP: A Turkish Benchmark of Linguistic Minimal Pairs
Computation and Language
Tests computer language skills for Turkish.
MultiBLiMP 1.0: A Massively Multilingual Benchmark of Linguistic Minimal Pairs
Computation and Language
Tests computers on understanding many languages.
UrduLLaMA 1.0: Dataset Curation, Preprocessing, and Evaluation in Low-Resource Settings
Computation and Language
Makes computers understand and translate Urdu better.