Explainable Disentanglement on Discrete Speech Representations for Noise-Robust ASR
By: Shreyas Gopal, Ashutosh Anshul, Haoyang Li, and more
Potential Business Impact:
Cleans noisy speech for better understanding.
Discrete audio representations are gaining traction in speech modeling due to their interpretability and compatibility with large language models, but they are not always optimized for noisy or real-world environments. Building on existing work that quantizes Whisper embeddings for speech-to-unit modeling, we propose disentangling semantic speech content from background noise in the latent space. Our end-to-end model separates clean speech in the form of codebook tokens, while extracting interpretable noise vectors as the quantization residue, which is supervised via a lightweight classifier. We show that our approach improves alignment between clean/noisy speech and text, producing speech tokens with a high degree of noise-invariance, and improves ASR performance. Keeping Whisper frozen, we achieve an 82% reduction in error rate compared to Whisper and a 35% improvement over baseline methods on the VBDemand test set. Further analyses show that the learned token space generalizes well to both seen and unseen acoustic conditions.
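The abstract describes quantizing frozen Whisper embeddings into clean-speech codebook tokens and treating the quantization residue as a noise vector supervised by a lightweight classifier. Below is a minimal PyTorch sketch of that idea; the module names, dimensions, pooling choice, and loss weighting are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of codebook quantization with a noise-supervised residue.
# All hyperparameters and names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualNoiseDisentangler(nn.Module):
    def __init__(self, embed_dim=1024, codebook_size=512, num_noise_classes=10):
        super().__init__()
        # Codebook whose entries stand in for discrete clean-speech tokens.
        self.codebook = nn.Embedding(codebook_size, embed_dim)
        # Lightweight classifier that pushes noise information into the residue.
        self.noise_classifier = nn.Linear(embed_dim, num_noise_classes)

    def forward(self, whisper_embeddings):
        # whisper_embeddings: (batch, time, embed_dim) from a frozen Whisper encoder.
        # Nearest-neighbour lookup -> discrete token indices.
        distances = torch.cdist(whisper_embeddings, self.codebook.weight.unsqueeze(0))
        tokens = distances.argmin(dim=-1)          # (batch, time)
        quantized = self.codebook(tokens)          # (batch, time, embed_dim)
        # Whatever quantization discards is treated as the noise vector.
        residue = whisper_embeddings - quantized
        # Utterance-level pooling before classifying the noise type (an assumption).
        noise_logits = self.noise_classifier(residue.mean(dim=1))
        return tokens, quantized, residue, noise_logits

def training_losses(whisper_embeddings, quantized, noise_logits, noise_labels, beta=0.25):
    # Standard VQ-style codebook/commitment terms plus noise-type supervision.
    codebook_loss = F.mse_loss(quantized, whisper_embeddings.detach())
    commitment_loss = F.mse_loss(whisper_embeddings, quantized.detach())
    noise_loss = F.cross_entropy(noise_logits, noise_labels)
    return codebook_loss + beta * commitment_loss + noise_loss
```

In this sketch the Whisper encoder stays frozen and only the codebook and classifier are trained, mirroring the paper's claim of keeping Whisper fixed; the downstream speech-to-unit ASR model that consumes the tokens is omitted.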
Similar Papers
Entropy-based Coarse and Compressed Semantic Speech Representation Learning
Computation and Language
Makes computers understand talking with fewer details.
Phonological Representation Learning for Isolated Signs Improves Out-of-Vocabulary Generalization
Computation and Language
Helps computers understand new sign language words.
Harmonic-Percussive Disentangled Neural Audio Codec for Bandwidth Extension
Sound
Makes old recordings sound clear and new.