ELYADATA & LIA at NADI 2025: ASR and ADI Subtasks
By: Haroun Elleuch, Youssef Saidi, Salima Mdhaffar and more
Potential Business Impact:
Helps computers understand different Arabic accents better.
This paper describes Elyadata & LIA's joint submission to the NADI 2025 multi-dialectal Arabic Speech Processing shared task. We participated in the Spoken Arabic Dialect Identification (ADI) and multi-dialectal Arabic ASR subtasks. Our submission ranked first for the ADI subtask and second for the multi-dialectal Arabic ASR subtask among all participants. Our ADI system is a fine-tuned Whisper-large-v3 encoder with data augmentation. This system obtained the highest ADI accuracy score of 79.83% on the official test set. For multi-dialectal Arabic ASR, we fine-tuned SeamlessM4T-v2 Large (Egyptian variant) separately for each of the eight considered dialects. Overall, we obtained an average WER and CER of 38.54% and 14.53%, respectively, on the test set. Our results demonstrate the effectiveness of large pre-trained speech models with targeted fine-tuning for Arabic speech processing.
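The abstract describes the ADI system as a fine-tuned Whisper-large-v3 encoder used as a dialect classifier. A common way to realize this is to pool the encoder's hidden states over time and attach a linear classification head. The sketch below illustrates that pattern only; the `DialectClassifier` class and the stand-in linear "encoder" are hypothetical stand-ins (the real system would wrap the actual Whisper-large-v3 encoder), not the authors' implementation.

```python
import torch
import torch.nn as nn

class DialectClassifier(nn.Module):
    """Illustrative sketch: pool encoder hidden states over time, then
    map the pooled vector to dialect logits with a linear head."""

    def __init__(self, encoder, hidden_dim, num_dialects):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Linear(hidden_dim, num_dialects)

    def forward(self, features):
        hidden = self.encoder(features)   # (batch, time, hidden_dim)
        pooled = hidden.mean(dim=1)       # mean-pool over the time axis
        return self.head(pooled)          # (batch, num_dialects)

# Stand-in encoder for a self-contained example; in practice this would
# be the Whisper-large-v3 encoder producing per-frame hidden states.
encoder = nn.Linear(80, 256)  # 80 log-mel bins -> 256-dim "hidden states"
model = DialectClassifier(encoder, hidden_dim=256, num_dialects=8)

mel = torch.randn(2, 3000, 80)  # batch of 2 dummy log-mel spectrograms
logits = model(mel)
print(logits.shape)  # torch.Size([2, 8]) -- one score per dialect
```

In practice the whole stack (encoder plus head) is fine-tuned end-to-end on labeled dialect data, with data augmentation applied to the input audio as the abstract notes.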
Similar Papers
NADI 2025: The First Multidialectal Arabic Speech Processing Shared Task
Computation and Language
Helps computers understand different Arabic accents.
ADI-20: Arabic Dialect Identification dataset and models
Computation and Language
Helps computers understand all Arabic accents.