A Practitioner's Guide to Building ASR Models for Low-Resource Languages: A Case Study on Scottish Gaelic

Published: June 5, 2025 | arXiv ID: 2506.04915v1

By: Ondřej Klejch, William Lamb, Peter Bell

Potential Business Impact:

Enables accurate speech recognition for languages with very little available training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

An effective approach to the development of ASR systems for low-resource languages is to fine-tune an existing multilingual end-to-end model. When the original model has been trained on large quantities of data from many languages, fine-tuning can be effective with limited training data, even when the language in question was not present in the original training data. The fine-tuning approach has been encouraged by the availability of public-domain E2E models and is widely believed to lead to state-of-the-art results. This paper, however, challenges that belief. We show that an approach combining hybrid HMMs with self-supervised models can yield substantially better performance with limited training data. This combination allows better utilisation of all available speech and text data through continued self-supervised pre-training and semi-supervised training. We benchmark our approach on Scottish Gaelic, achieving WER reductions of 32% relative over our best fine-tuned Whisper model.
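The headline result is stated in terms of word error rate (WER) and its relative reduction. As a minimal sketch (illustrative numbers, not figures from the paper), WER is the word-level Levenshtein edit distance divided by the reference length, and a relative reduction compares two systems' WERs:

```python
def word_error_rate(ref: str, hyp: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count,
    computed with the standard Levenshtein edit distance over word tokens."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution or match
    return dp[len(r)][len(h)] / len(r)

def relative_wer_reduction(baseline: float, improved: float) -> float:
    """E.g. a baseline WER of 0.50 improved to 0.34 is a 32% relative reduction."""
    return (baseline - improved) / baseline

# Hypothetical Gaelic example: one substitution out of three reference words
print(word_error_rate("an t-eilean sgitheanach", "an eilean sgitheanach"))  # 0.333...
print(relative_wer_reduction(0.50, 0.34))  # 0.32 (illustrative, not the paper's numbers)
```

The 32% relative figure in the abstract is of this second kind: it measures the gap between the hybrid HMM system and the best fine-tuned Whisper baseline as a fraction of the baseline's WER, not an absolute drop of 32 WER points.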

Country of Origin
🇬🇧 United Kingdom

Page Count
5 pages

Category
Computer Science:
Computation and Language