Explainable Disentanglement on Discrete Speech Representations for Noise-Robust ASR

Published: October 29, 2025 | arXiv ID: 2510.25150v1

By: Shreyas Gopal, Ashutosh Anshul, Haoyang Li, and more

Potential Business Impact:

Separates background noise from speech so automatic speech recognition transcribes noisy, real-world audio more accurately.

Business Areas:
Speech Recognition, Data and Analytics, Software

Discrete audio representations are gaining traction in speech modeling due to their interpretability and compatibility with large language models, but they are not always optimized for noisy or real-world environments. Building on existing work that quantizes Whisper embeddings for speech-to-unit modeling, we propose disentangling semantic speech content from background noise in the latent space. Our end-to-end model separates clean speech in the form of codebook tokens, while extracting interpretable noise vectors as the quantization residue, which is supervised via a lightweight classifier. We show that our approach improves alignment between clean/noisy speech and text, produces speech tokens with a high degree of noise invariance, and improves ASR performance. Keeping Whisper frozen, we show an 82% reduction in error rate compared to Whisper, and a 35% improvement over baseline methods on the VBDemand test set. Further analyses show that the learned token space generalizes well to both seen and unseen acoustic conditions.
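To make the abstract's core idea concrete, below is a minimal sketch (not the authors' code) of how such disentanglement could look: frozen Whisper embeddings are vector-quantized into "clean speech" codebook tokens, the quantization residue is treated as a noise vector, and a lightweight classifier supervises that residue. All module names, dimensions, the number of noise classes, and the loss weighting are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DisentanglingQuantizer(nn.Module):
    """Hypothetical VQ module: codebook tokens carry speech content,
    the quantization residue carries noise information."""

    def __init__(self, dim=512, codebook_size=1024, num_noise_classes=10):
        super().__init__()
        # Codebook for the semantic (clean-speech) tokens.
        self.codebook = nn.Embedding(codebook_size, dim)
        # Lightweight classifier that supervises the residue as a noise vector.
        self.noise_clf = nn.Linear(dim, num_noise_classes)

    def forward(self, whisper_emb):
        # whisper_emb: (batch, frames, dim) embeddings from a frozen Whisper encoder.
        B, T, D = whisper_emb.shape
        flat = whisper_emb.reshape(-1, D)                    # (B*T, D)
        dists = torch.cdist(flat, self.codebook.weight)      # (B*T, codebook_size)
        tokens = dists.argmin(dim=-1).reshape(B, T)          # discrete speech tokens
        quantized = self.codebook(tokens)                    # (B, T, D) clean-speech codes
        # Quantization residue, interpreted as an interpretable noise vector.
        residue = whisper_emb - quantized.detach()
        # Pool the residue over time and predict the noise type.
        noise_logits = self.noise_clf(residue.mean(dim=1))   # (B, num_noise_classes)
        return tokens, quantized, residue, noise_logits


def training_losses(whisper_emb, quantized, noise_logits, noise_labels, beta=0.25):
    # Standard VQ codebook + commitment terms, plus cross-entropy on the noise
    # label predicted from the residue; the real objective presumably also
    # includes a speech-to-unit / ASR term not shown here.
    vq_loss = F.mse_loss(quantized, whisper_emb.detach()) \
        + beta * F.mse_loss(whisper_emb, quantized.detach())
    noise_loss = F.cross_entropy(noise_logits, noise_labels)
    return vq_loss + noise_loss
```

Under these assumptions, the tokens fed to the downstream recognizer are forced to explain the speech content while noise-specific variation is pushed into the residue, which is what would make the token space largely noise-invariant.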

Country of Origin
🇸🇬 Singapore

Page Count
6 pages

Category
Computer Science:
Computation and Language