Advancing Arabic Speech Recognition Through Large-Scale Weakly Supervised Learning
By: Mahmoud Salhab, Marwan Elghitany, Shameed Sait, and more
Potential Business Impact:
Teaches computers to understand Arabic speech without costly human-made transcriptions.
Automatic speech recognition (ASR) is crucial for human-machine interaction in diverse applications like conversational agents, industrial robotics, call center automation, and automated subtitling. However, developing high-performance ASR models remains challenging, particularly for low-resource languages like Arabic, due to the scarcity of large, labeled speech datasets, which are costly and labor-intensive to produce. In this work, we employ weakly supervised learning to train an Arabic ASR model using the Conformer architecture. Our model is trained from scratch on 15,000 hours of weakly annotated speech data covering both Modern Standard Arabic (MSA) and Dialectal Arabic (DA), eliminating the need for costly manual transcriptions. Despite the absence of human-verified labels, our approach achieves state-of-the-art (SOTA) results in Arabic ASR, surpassing both open and closed-source models on standard benchmarks. These results demonstrate that weak supervision is a scalable, cost-efficient alternative to traditional supervised approaches, paving the way for improved ASR systems in low-resource settings.
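A common ingredient of weakly supervised ASR pipelines like the one described above is filtering machine-generated transcripts by a confidence score before training. The paper does not specify its exact pipeline; the sketch below is a minimal, hypothetical illustration of that filtering step, with made-up sample IDs, transcripts, and a threshold chosen for demonstration.

```python
# Minimal sketch of a weak-label filtering step (illustrative only; the
# paper's actual pipeline, models, and thresholds are not specified here).

def filter_weak_labels(samples, min_confidence=0.9):
    """Keep only samples whose machine-generated transcript meets a
    confidence threshold. Each sample is (audio_id, transcript, confidence)."""
    return [s for s in samples if s[2] >= min_confidence]

# Hypothetical machine-labeled samples: (audio_id, transcript, confidence)
weak_samples = [
    ("utt_001", "مرحبا بالعالم", 0.97),
    ("utt_002", "صباح الخير", 0.62),  # low confidence -> dropped
    ("utt_003", "كيف حالك", 0.91),
]

kept = filter_weak_labels(weak_samples)
print([s[0] for s in kept])  # ['utt_001', 'utt_003']
```

In practice the retained pseudo-labeled pairs would then feed a standard supervised training loop (e.g., CTC or transducer loss on a Conformer encoder), which is how weak supervision substitutes for manual transcription at scale.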
Similar Papers
Munsit at NADI 2025 Shared Task 2: Pushing the Boundaries of Multidialectal Arabic ASR with Weakly Supervised Pretraining and Continual Supervised Fine-tuning
Computation and Language
Helps computers understand many Arabic accents.
Arabic ASR on the SADA Large-Scale Arabic Speech Corpus with Transformer-Based Models
Audio and Speech Processing
Helps computers understand different Arabic accents better.
Efficient ASR for Low-Resource Languages: Leveraging Cross-Lingual Unlabeled Data
Computation and Language
Lets computers understand rare languages better.