A Practitioner's Guide to Building ASR Models for Low-Resource Languages: A Case Study on Scottish Gaelic
By: Ondřej Klejch, William Lamb, Peter Bell
Potential Business Impact:
Teaches computers to understand rare languages better.
An effective approach to the development of ASR systems for low-resource languages is to fine-tune an existing multilingual end-to-end (E2E) model. When the original model has been trained on large quantities of data from many languages, fine-tuning can be effective with limited training data, even when the language in question was not present in the original training data. The fine-tuning approach has been encouraged by the availability of publicly released E2E models and is widely believed to lead to state-of-the-art results. This paper, however, challenges that belief. We show that an approach combining hybrid hidden Markov models (HMMs) with self-supervised models can yield substantially better performance with limited training data. This combination allows better utilisation of all available speech and text data through continued self-supervised pre-training and semi-supervised training. We benchmark our approach on Scottish Gaelic, achieving relative word error rate (WER) reductions of 32% over our best fine-tuned Whisper model.
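
As a rough illustration of the fine-tuning baseline the abstract argues against, the sketch below adapts a pretrained multilingual Whisper checkpoint with one supervised gradient step via the Hugging Face Transformers API. This is not the authors' recipe: the checkpoint name, the placeholder Gaelic utterance, and the learning rate are illustrative assumptions.

    # Minimal sketch of supervised Whisper fine-tuning (assumed setup, not the paper's).
    import torch
    from transformers import WhisperForConditionalGeneration, WhisperProcessor

    # Assumed checkpoint; the paper compares against its "best fine-tuned Whisper model".
    processor = WhisperProcessor.from_pretrained("openai/whisper-small")
    model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

    # Hypothetical labelled example: 5 seconds of 16 kHz audio plus its transcript.
    waveform = torch.randn(16000 * 5)       # placeholder audio, stands in for real speech
    transcript = "ciamar a tha thu"         # placeholder Scottish Gaelic transcript

    # Convert audio to log-mel input features and the transcript to label token ids.
    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids

    # One standard fine-tuning step: cross-entropy loss on the decoder outputs.
    model.train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
    loss = model(input_features=inputs.input_features, labels=labels).loss
    loss.backward()
    optimizer.step()
    print(f"fine-tuning loss: {loss.item():.3f}")

In practice this loop would run over a full labelled corpus; the paper's point is that with limited data, the hybrid HMM plus self-supervised pipeline can exploit unlabelled speech and text that this purely supervised recipe cannot.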
Similar Papers
How I Built ASR for Endangered Languages with a Spoken Dictionary
Computation and Language
Helps save dying languages with less speech data.
Whispering in Amharic: Fine-tuning Whisper for Low-resource Language
Computation and Language
Helps computers understand Amharic speech better.
Fine Tuning Methods for Low-resource Languages
Computation and Language
Helps AI understand and use other languages better.