A Self-Refining Framework for Enhancing ASR Using TTS-Synthesized Data
By: Cheng-Kang Chou, Chan-Jan Hsu, Ho-Lam Chung, and more
Potential Business Impact:
Makes voice assistants understand more words.
We propose a self-refining framework that enhances ASR performance using only unlabeled datasets. The process starts with an existing ASR model generating pseudo-labels on unannotated speech, which are then used to train a high-fidelity text-to-speech (TTS) system. Synthesized speech-text pairs from that TTS system are then bootstrapped back into the original ASR system, completing the closed-loop self-improvement cycle. We demonstrate the effectiveness of the framework on Taiwanese Mandarin speech. Leveraging 6,000 hours of unlabeled speech, a moderate amount of text data, and synthetic content from these AI models, we adapt Whisper-large-v2 into a specialized model, Twister. Twister reduces error rates by up to 20% on Mandarin and 50% on Mandarin-English code-switching benchmarks relative to Whisper-large-v2. The results highlight the framework as a compelling alternative to pseudo-labeling self-distillation approaches and provide a practical pathway for improving ASR performance in low-resource or domain-specific settings.
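As a rough illustration of the closed loop described above, here is a minimal Python sketch. The pseudo-labeling step uses the real Hugging Face transformers pipeline for Whisper; `train_tts`, `synthesize`, and `finetune_asr` are hypothetical placeholders for stages whose recipes the abstract does not specify.

```python
# Minimal sketch of the self-refining ASR<->TTS cycle from the abstract.
# Only the pseudo-labeling step uses a real API (Hugging Face transformers);
# the remaining functions are hypothetical placeholders, not the paper's code.

from transformers import pipeline


def pseudo_label(audio_paths):
    """Step 1: an existing ASR model transcribes unlabeled speech."""
    asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v2")
    return [(path, asr(path)["text"]) for path in audio_paths]


def train_tts(speech_text_pairs):
    """Step 2 (hypothetical): train a high-fidelity TTS system on the
    pseudo-labeled pairs. Placeholder for the paper's TTS training recipe."""
    raise NotImplementedError("placeholder for the TTS training stage")


def synthesize(tts_model, texts):
    """Step 3 (hypothetical): generate synthetic speech for unpaired text,
    yielding new (synthetic audio, text) training pairs."""
    raise NotImplementedError("placeholder for TTS inference")


def finetune_asr(base_model, synthetic_pairs):
    """Step 4 (hypothetical): adapt the original ASR model on the
    synthesized pairs (Whisper-large-v2 -> Twister in the paper)."""
    raise NotImplementedError("placeholder for ASR fine-tuning")


def self_refine(unlabeled_audio, text_corpus):
    pairs = pseudo_label(unlabeled_audio)      # unlabeled speech -> pseudo-labels
    tts = train_tts(pairs)                     # pseudo-labeled pairs -> TTS model
    synthetic = synthesize(tts, text_corpus)   # text -> synthetic speech-text pairs
    return finetune_asr("openai/whisper-large-v2", synthetic)  # closes the loop
```

In this reading, each pass through `self_refine` converts unlabeled speech plus unpaired text into new supervised training pairs, which is what distinguishes the approach from pseudo-labeling self-distillation alone.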
Similar Papers
Frustratingly Easy Data Augmentation for Low-Resource ASR
Computation and Language
Makes talking computers understand rare languages better.
Transcript-Prompted Whisper with Dictionary-Enhanced Decoding for Japanese Speech Annotation
Computation and Language
Makes computer voices sound more natural.
Better Pseudo-labeling with Multi-ASR Fusion and Error Correction by SpeechLLM
Audio and Speech Processing
Makes computers understand spoken words better.