Better Semi-supervised Learning for Multi-domain ASR Through Incremental Retraining and Data Filtering

Published: June 5, 2025 | arXiv ID: 2506.04981v1

By: Andres Carofilis, Pradeep Rangappa, Srikanth Madikeri, and more

Potential Business Impact:

Teaches computers to transcribe speech more accurately with less labeled data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Fine-tuning pretrained ASR models for specific domains is challenging when labeled data is scarce, but unlabeled audio and labeled data from related domains are often available. We propose an incremental semi-supervised learning pipeline that first integrates a small in-domain labeled set with an auxiliary dataset from a closely related domain, yielding a 4% relative improvement over using no auxiliary data. Filtering based on multi-model consensus or named entity recognition (NER) is then applied to select and iteratively refine pseudo-labels, showing slower performance saturation than random selection. Evaluated on the multi-domain Wow call center and Fisher English corpora, the pipeline outperforms single-step fine-tuning. Consensus-based filtering performs best, providing up to 22.3% relative improvement on Wow and 24.8% on Fisher over single-step fine-tuning with random selection; NER is the second-best filter, offering competitive performance at a lower computational cost.
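To make the filtering step concrete, here is a minimal Python sketch of multi-model consensus filtering. It assumes each ASR model is a callable mapping an audio path to a transcript; the `transcribe`-style interface, the use of the `jiwer` library to score pairwise agreement via word error rate, and the 0.15 threshold are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of consensus-based pseudo-label filtering. Each "model" is
# assumed to be a callable taking an audio path and returning a transcript;
# names and the WER threshold are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable, List

import jiwer  # pip install jiwer; computes word error rate between strings


@dataclass
class PseudoLabel:
    audio_path: str
    transcript: str   # hypothesis kept as the pseudo-label
    agreement: float  # 1 - mean pairwise WER across models


def consensus_filter(
    audio_paths: List[str],
    models: List[Callable[[str], str]],
    wer_threshold: float = 0.15,  # assumed value; tune on a dev set
) -> List[PseudoLabel]:
    """Keep utterances whose transcripts agree across models.

    For each utterance, transcribe with every model, compute the mean
    pairwise WER between hypotheses, and retain the utterance (with the
    first model's hypothesis) only when the models agree closely enough.
    """
    kept = []
    for path in audio_paths:
        hyps = [model(path) for model in models]
        pair_wers = [
            jiwer.wer(hyps[i], hyps[j])
            for i in range(len(hyps))
            for j in range(i + 1, len(hyps))
        ]
        mean_wer = sum(pair_wers) / len(pair_wers)
        if mean_wer <= wer_threshold:
            kept.append(PseudoLabel(path, hyps[0], 1.0 - mean_wer))
    # Highest-agreement utterances first, so a fixed budget keeps the best.
    kept.sort(key=lambda p: p.agreement, reverse=True)
    return kept
```

The incremental part of the pipeline can be sketched the same way. The outer loop below is equally hypothetical: it starts from a model fine-tuned on the small labeled set plus the auxiliary domain, then alternates pseudo-labeling, consensus filtering, and retraining until the dev-set WER stops improving. The `retrain` and `evaluate` routines are assumed to exist.

```python
# Hypothetical incremental SSL loop around consensus_filter above.
def incremental_ssl(model, unlabeled, labeled, consensus_models,
                    retrain, evaluate, max_rounds=5):
    best_wer = evaluate(model)
    for _ in range(max_rounds):
        pseudo = consensus_filter(unlabeled, consensus_models)
        model = retrain(model, labeled, pseudo)  # assumed training routine
        wer = evaluate(model)
        if wer >= best_wer:  # saturation: stop when no further gain
            break
        best_wer = wer
    return model
```

Keeping only high-agreement utterances each round is one plausible way to realize the slower saturation the abstract reports for consensus filtering relative to random selection: cleaner pseudo-labels feed each retraining step, so gains persist across more iterations.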


Page Count
5 pages

Category
Computer Science:
Computation and Language