A Self-Refining Framework for Enhancing ASR Using TTS-Synthesized Data

Published: June 10, 2025 | arXiv ID: 2506.11130v2

By: Cheng-Kang Chou, Chan-Jan Hsu, Ho-Lam Chung, and more

Potential Business Impact:

Improves how accurately voice assistants recognize speech, particularly Mandarin and Mandarin-English code-switching.

Business Areas:
Speech Recognition Data and Analytics, Software

We propose a self-refining framework that enhances ASR performance using only unlabeled datasets. The process starts with an existing ASR model generating pseudo-labels on unannotated speech, which are then used to train a high-fidelity text-to-speech (TTS) system. The synthesized speech-text pairs are then bootstrapped back into the original ASR system, completing the closed-loop self-improvement cycle. We demonstrate the effectiveness of the framework on Taiwanese Mandarin speech. Leveraging 6,000 hours of unlabeled speech, a moderate amount of text data, and synthetic content from the AI models, we adapt Whisper-large-v2 into a specialized model, Twister. Twister reduces error rates by up to 20% on Mandarin and 50% on Mandarin-English code-switching benchmarks compared to Whisper. The results highlight the framework as a compelling alternative to pseudo-labeling self-distillation approaches and provide a practical pathway for improving ASR performance in low-resource or domain-specific settings.
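The four stages of the closed-loop cycle described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: every model here is a toy stand-in (a real setup would use Whisper-large-v2 as the seed ASR and a neural TTS system), and all function names are hypothetical.

```python
# Sketch of the self-refining cycle: ASR pseudo-labels -> TTS training ->
# synthetic speech-text pairs -> ASR adaptation. Toy stand-in models only.

def pseudo_label(asr, unlabeled_speech):
    """Step 1: the seed ASR model transcribes unannotated audio clips."""
    return [(clip, asr(clip)) for clip in unlabeled_speech]

def train_tts(speech_text_pairs):
    """Step 2: train a TTS system on the pseudo-labeled pairs.
    Placeholder: a lookup table from text to audio, with a tagged
    fallback standing in for genuine synthesis of unseen text."""
    table = {text: clip for clip, text in speech_text_pairs}
    return lambda text: table.get(text, "synth:" + text)

def synthesize(tts, text_corpus):
    """Step 3: generate synthetic speech-text pairs from a text corpus."""
    return [(tts(text), text) for text in text_corpus]

def fine_tune(asr, synthetic_pairs):
    """Step 4: adapt the ASR model on the synthetic pairs.
    Placeholder: memorize the synthetic audio -> text mapping and fall
    back to the seed model elsewhere."""
    memo = {clip: text for clip, text in synthetic_pairs}
    return lambda clip: memo.get(clip, asr(clip))

def self_refine(seed_asr, unlabeled_speech, text_corpus):
    """One full pass of the closed-loop self-improvement cycle."""
    pairs = pseudo_label(seed_asr, unlabeled_speech)
    tts = train_tts(pairs)
    synthetic = synthesize(tts, text_corpus)
    return fine_tune(seed_asr, synthetic)
```

In the paper's setting, `unlabeled_speech` corresponds to the 6,000 hours of unannotated audio and `text_corpus` to the moderate amount of text data; the adapted model returned by `self_refine` plays the role of Twister.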

Country of Origin
🇹🇼 Taiwan, Province of China

Page Count
8 pages

Category
Computer Science:
Computation and Language