!MSA at BAREC Shared Task 2025: Ensembling Arabic Transformers for Readability Assessment
By: Mohamed Basem, Mohamed Younes, Seif Ahmed, and more
Potential Business Impact:
Helps computers judge how hard Arabic text is to read.
We present !MSA's winning system for the BAREC 2025 Shared Task on fine-grained Arabic readability assessment, achieving first place in all six tracks. Our approach is a confidence-weighted ensemble of four complementary transformer models (AraBERTv2, AraELECTRA, MARBERT, and CAMeLBERT), each fine-tuned with a distinct loss function to capture diverse readability signals. To tackle severe class imbalance and data scarcity, we applied weighted training, advanced preprocessing, SAMER corpus relabeling with our strongest model, and synthetic data generation via Gemini 2.5 Flash, adding about 10,000 rare-level samples. A targeted post-processing step corrected the skew in the prediction distribution, delivering a 6.3 percent Quadratic Weighted Kappa (QWK) gain. Our system reached 87.5 percent QWK at the sentence level and 87.4 percent at the document level, demonstrating the power of model and loss diversity, confidence-informed fusion, and intelligent augmentation for robust Arabic readability prediction.
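To make the fusion step concrete, here is a minimal sketch of confidence-weighted ensembling and QWK evaluation. It assumes each model outputs per-sentence softmax probabilities and that the task uses a 19-level readability scale; the function name, toy data, and weighting scheme are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the !MSA code): fuse per-model class probabilities,
# weighting each model's vote by its per-sample confidence, then score with QWK.
import numpy as np
from sklearn.metrics import cohen_kappa_score


def confidence_weighted_ensemble(prob_matrices):
    """Fuse a list of (n_samples, n_levels) probability matrices.

    Each model's probabilities are scaled by its per-sample confidence
    (max softmax probability), so more confident models dominate the fusion.
    """
    fused = np.zeros_like(prob_matrices[0])
    for probs in prob_matrices:
        confidence = probs.max(axis=1, keepdims=True)  # per-sample confidence
        fused += confidence * probs
    return fused.argmax(axis=1)  # predicted readability level per sample


# Toy example: 100 sentences, 19 readability levels, 4 models (random data).
rng = np.random.default_rng(0)
prob_matrices = [rng.dirichlet(np.ones(19), size=100) for _ in range(4)]
preds = confidence_weighted_ensemble(prob_matrices)

# QWK, the shared-task metric, penalizes predictions that land far from the
# gold level more heavily than near misses.
gold = rng.integers(0, 19, size=100)
print(cohen_kappa_score(gold, preds, weights="quadratic"))
```

In practice the per-model confidences could come from calibrated probabilities or validation performance rather than raw softmax maxima; the point of the sketch is only to show how confidence-informed fusion and the QWK metric fit together.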
Similar Papers
mucAI at BAREC Shared Task 2025: Towards Uncertainty Aware Arabic Readability Assessment
Computation and Language
Helps grade Arabic text difficulty more accurately.
!MSA at AraHealthQA 2025 Shared Task: Enhancing LLM Performance for Arabic Clinical Question Answering through Prompt Engineering and Ensemble Learning
Computation and Language
Helps doctors answer health questions in Arabic.
BUSTED at AraGenEval Shared Task: A Comparative Study of Transformer-Based Models for Arabic AI-Generated Text Detection
Computation and Language
Finds fake Arabic writing using smart computer programs.