Benchmarking Automatic Speech Recognition Models for African Languages

Published: November 30, 2025 | arXiv ID: 2512.10968v1

By: Alvin Nahabwe, Sulaiman Kagumire, Denis Musinguzi, and more

Potential Business Impact:

Helps speech recognition systems better support many African languages.

Business Areas:
Speech Recognition Data and Analytics, Software

Automatic speech recognition (ASR) for African languages remains constrained by limited labeled data and the lack of systematic guidance on model selection, data scaling, and decoding strategies. Large pre-trained systems such as Whisper, XLS-R, MMS, and W2v-BERT have expanded access to ASR technology, but their comparative behavior in African low-resource contexts has not been studied in a unified and systematic way. In this work, we benchmark four state-of-the-art ASR models across 13 African languages, fine-tuning them on progressively larger subsets of transcribed data ranging from 1 to 400 hours. Beyond reporting error rates, we provide new insights into why models behave differently under varying conditions. We show that MMS and W2v-BERT are more data efficient in very low-resource regimes, XLS-R scales more effectively as additional data becomes available, and Whisper demonstrates advantages in mid-resource conditions. We also analyze where external language model decoding yields improvements and identify cases where it plateaus or introduces additional errors, depending on the alignment between acoustic and text resources. By highlighting the interaction between pre-training coverage, model architecture, dataset domain, and resource availability, this study offers practical insights into the design of ASR systems for underrepresented languages.
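The error rates the benchmark reports are typically word error rates (WER), computed as the word-level edit distance between a reference transcript and the model's hypothesis, normalized by reference length. A minimal illustrative sketch (not the paper's evaluation code, which is not shown here):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming table for edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)


print(wer("the cat sat on the mat", "the cat sat on the mat"))  # 0.0
print(wer("a b c d", "a x c d"))  # 0.25: one substitution over four words
```

In practice, libraries such as jiwer are commonly used for this computation; the sketch above just makes the metric behind the reported numbers concrete.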

Country of Origin
πŸ‡ΊπŸ‡¬ Uganda

Page Count
19 pages

Category
Computer Science:
Computation and Language