Towards Stable AI Systems for Evaluating Arabic Pronunciations
By: Hadi Zaatiti, Hatem Hajri, Osama Abdullah, and more
Potential Business Impact:
Teaches computers to recognize isolated Arabic letter sounds.
Modern Arabic ASR systems such as wav2vec 2.0 excel at word- and sentence-level transcription, yet struggle to classify isolated letters. In this study, we show that this phoneme-level task, crucial for language learning, speech therapy, and phonetic research, is challenging because isolated letters lack co-articulatory cues, provide no lexical context, and last only a few hundred milliseconds. Recognisers must therefore rely solely on highly variable acoustic cues, a difficulty heightened by Arabic's emphatic (pharyngealised) consonants and other sounds with no close analogues in many languages. This study introduces a diverse, diacritised corpus of isolated Arabic letters and demonstrates that state-of-the-art wav2vec 2.0 models achieve only 35% accuracy on it. Training a lightweight neural network on wav2vec embeddings raises performance to 65%. However, adding a small amplitude perturbation (epsilon = 0.05) cuts accuracy to 32%. To restore robustness, we apply adversarial training, limiting the noisy-speech drop to 9% while preserving clean-speech accuracy. We detail the corpus, training pipeline, and evaluation protocol, and release data and code on demand for reproducibility. Finally, we outline future work extending these methods to word- and sentence-level frameworks, where precise letter pronunciation remains critical.
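To make the pipeline concrete, here is a minimal sketch of the approach the abstract describes: a lightweight classifier trained on frozen wav2vec 2.0 embeddings, hardened with adversarial training on amplitude-perturbed waveforms. The checkpoint name, head architecture, FGSM-style perturbation, 50/50 clean/adversarial mixing, and all hyperparameters are illustrative assumptions, not the authors' exact setup; only the perturbation budget epsilon = 0.05 comes from the abstract.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Wav2Vec2Model

EPSILON = 0.05      # perturbation amplitude reported in the abstract
NUM_LETTERS = 28    # Arabic alphabet; a diacritised corpus may use more classes

# Frozen feature extractor (checkpoint name is an assumption).
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53")
encoder.eval()
for p in encoder.parameters():
    p.requires_grad_(False)

# Lightweight classification head on pooled embeddings (sizes assumed).
head = nn.Sequential(
    nn.Linear(encoder.config.hidden_size, 256),
    nn.ReLU(),
    nn.Linear(256, NUM_LETTERS),
)

def embed(waveform):
    """Mean-pool the final hidden states into one utterance-level vector."""
    hidden = encoder(waveform).last_hidden_state  # (batch, frames, dim)
    return hidden.mean(dim=1)

def fgsm_perturb(waveform, labels):
    """FGSM-style waveform perturbation bounded by EPSILON (an assumed choice)."""
    waveform = waveform.clone().requires_grad_(True)
    loss = F.cross_entropy(head(embed(waveform)), labels)
    loss.backward()
    return (waveform + EPSILON * waveform.grad.sign()).detach()

optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

def adversarial_training_step(waveform, labels):
    """One update mixing clean and adversarial examples (50/50 mix assumed)."""
    adv = fgsm_perturb(waveform, labels)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(head(embed(waveform)), labels) \
         + 0.5 * F.cross_entropy(head(embed(adv)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Keeping the encoder frozen means only the small head is updated, which matches the abstract's framing of training a lightweight network on top of wav2vec embeddings; the adversarial term is what limits the noisy-speech accuracy drop.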
Similar Papers
Enhancing Quranic Learning: A Multimodal Deep Learning Approach for Arabic Phoneme Recognition
Sound
Helps computers check Arabic pronunciation accurately.
Speaker Diarization for Low-Resource Languages Through Wav2vec Fine-Tuning
Sound
Helps computers tell who is talking in low-resource languages.
Automatic Pronunciation Error Detection and Correction of the Holy Quran's Learners Using Deep Learning
Audio and Speech Processing
Teaches computers to judge Quran reading accurately.