Speech Language Models for Under-Represented Languages: Insights from Wolof
By: Yaya Sy, Dioula Doucouré, Christophe Cerisara, and more
Potential Business Impact:
Helps computers understand and translate Wolof speech.
We present our journey in training a speech language model for Wolof, an under-represented language spoken in West Africa, and share key insights. We first emphasize the importance of collecting large-scale, spontaneous, high-quality unsupervised speech data, and show that continued pretraining of HuBERT on this dataset outperforms both the base model and African-centric models on ASR. We then integrate this speech encoder into a Wolof LLM to train the first Speech LLM for this language, extending its capabilities to tasks such as speech translation. Furthermore, we explore training the Speech LLM to perform multi-step Chain-of-Thought reasoning before transcribing or translating. Our results show that the Speech LLM not only improves speech recognition but also performs well in speech translation. The models and the code will be openly shared.
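To make the architecture described in the abstract more concrete, below is a minimal sketch of how a HuBERT speech encoder can be wired into a causal LLM as a prefix of projected audio features. This is not the authors' released code: the checkpoints ("facebook/hubert-base-ls960", "gpt2" as a stand-in for the Wolof LLM), the linear projector, and the prompt are illustrative assumptions only.

```python
# Hedged sketch: HuBERT features projected into an LLM's embedding space,
# then prepended to a text prompt before generation. Checkpoints and the
# projection layer are placeholders, not the paper's actual components.
import torch
import torch.nn as nn
from transformers import HubertModel, AutoModelForCausalLM, AutoTokenizer

speech_encoder = HubertModel.from_pretrained("facebook/hubert-base-ls960")  # stand-in for the Wolof-adapted HuBERT
llm = AutoModelForCausalLM.from_pretrained("gpt2")                          # stand-in for the Wolof LLM
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Project HuBERT frame features into the LLM's hidden dimension.
projector = nn.Linear(speech_encoder.config.hidden_size, llm.config.hidden_size)

def speech_to_prefix(waveform: torch.Tensor) -> torch.Tensor:
    """Encode raw audio of shape (batch, samples) into LLM-space prefix embeddings."""
    with torch.no_grad():
        frames = speech_encoder(waveform).last_hidden_state  # (batch, T, encoder_hidden)
    return projector(frames)                                 # (batch, T, llm_hidden)

# Prepend the audio prefix to a text prompt's embeddings and generate text.
prompt_ids = tokenizer("Transcribe the audio:", return_tensors="pt").input_ids
prompt_emb = llm.get_input_embeddings()(prompt_ids)
audio = torch.randn(1, 16000)  # one second of 16 kHz audio, for illustration only
inputs_embeds = torch.cat([speech_to_prefix(audio), prompt_emb], dim=1)
out = llm.generate(inputs_embeds=inputs_embeds, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the paper's setup, the same interface would presumably let the prompt request either a transcription or a translation, with the Chain-of-Thought variant generating intermediate reasoning tokens before the final output.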
Similar Papers
The State of Large Language Models for African Languages: Progress and Challenges
Artificial Intelligence
Helps computers understand more African languages.
Dealing with the Hard Facts of Low-Resource African NLP
Computation and Language
Helps computers understand a rare language.
Lugha-Llama: Adapting Large Language Models for African Languages
Computation and Language
Teaches computers to understand African languages better.