LAILA: A Large Trait-Based Dataset for Arabic Automated Essay Scoring
By: May Bashendy, Walid Massoud, Sohaila Eltanbouly, et al.
Automated Essay Scoring (AES) has gained increasing attention in recent years, yet research on Arabic AES remains limited due to the lack of publicly available datasets. To address this gap, we introduce LAILA, the largest publicly available Arabic AES dataset to date, comprising 7,859 essays annotated with holistic scores and trait-specific scores along seven dimensions: relevance, organization, vocabulary, style, development, mechanics, and grammar. We detail the dataset's design, collection, and annotation, and provide benchmark results using state-of-the-art Arabic and English models in both prompt-specific and cross-prompt settings. LAILA fills a critical need in Arabic AES research, supporting the development of robust scoring systems.
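To make the dataset's structure concrete, below is a minimal Python sketch of what a LAILA-style annotated record might look like, together with Quadratic Weighted Kappa (QWK), the metric conventionally reported in AES benchmarks. The field names (essay_id, prompt_id, etc.) and the choice of QWK are illustrative assumptions, not details taken from the paper; only the seven trait names come from the abstract.

```python
from dataclasses import dataclass

# The seven trait dimensions named in the abstract.
TRAITS = ["relevance", "organization", "vocabulary", "style",
          "development", "mechanics", "grammar"]

@dataclass
class EssayRecord:
    """One LAILA-style annotated essay (field names are assumptions)."""
    essay_id: str
    prompt_id: str           # lets us build prompt-specific vs. cross-prompt splits
    text: str                # the Arabic essay text
    holistic: int            # overall score
    traits: dict[str, int]   # one score per trait in TRAITS

def quadratic_weighted_kappa(pred: list[int], gold: list[int],
                             min_score: int, max_score: int) -> float:
    """QWK between predicted and gold scores on a fixed integer scale.
    Whether LAILA's benchmarks use QWK is an assumption here; it is the
    standard agreement metric in AES evaluation."""
    n = max_score - min_score + 1
    # Observed confusion matrix of (pred, gold) score pairs.
    observed = [[0.0] * n for _ in range(n)]
    for p, g in zip(pred, gold):
        observed[p - min_score][g - min_score] += 1
    total = len(pred)
    hist_pred = [sum(row) for row in observed]
    hist_gold = [sum(observed[i][j] for i in range(n)) for j in range(n)]
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            weight = ((i - j) ** 2) / ((n - 1) ** 2)  # quadratic penalty
            expected = hist_pred[i] * hist_gold[j] / total
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den if den else 1.0
```

Under a schema like this, a prompt-specific benchmark trains and tests on essays sharing a prompt_id, while a cross-prompt benchmark holds out entire prompts at test time, so trait-level QWK can be reported per setting.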