Lightweight Wasserstein Audio-Visual Model for Unified Speech Enhancement and Separation
By: Jisoo Park, Seonghak Lee, Guisik Kim, and more
Potential Business Impact:
Cleans up noisy and overlapping voices.
Speech Enhancement (SE) and Speech Separation (SS) have traditionally been treated as distinct tasks in speech processing. However, real-world audio often involves both background noise and overlapping speakers, motivating the need for a unified solution. While recent approaches have attempted to integrate SE and SS within multi-stage architectures, these approaches typically involve complex, parameter-heavy models and rely on supervised training, limiting scalability and generalization. In this work, we propose UniVoiceLite, a lightweight and unsupervised audio-visual framework that unifies SE and SS within a single model. UniVoiceLite leverages lip motion and facial identity cues to guide speech extraction and employs Wasserstein distance regularization to stabilize the latent space without requiring paired noisy-clean data. Experimental results demonstrate that UniVoiceLite achieves strong performance in both noisy and multi-speaker scenarios, combining efficiency with robust generalization. The source code is available at https://github.com/jisoo-o/UniVoiceLite.
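The abstract does not specify how the Wasserstein regularization is computed, but a common, cheap way to apply a Wasserstein penalty to a latent space is the sliced Wasserstein distance against samples from a Gaussian prior: project latents onto random directions and compare the sorted 1-D projections. The sketch below is an illustrative assumption, not the paper's implementation; all names (`sliced_wasserstein`, batch shapes) are hypothetical.

```python
import numpy as np

def sliced_wasserstein(latents, prior, n_proj=50, seed=0):
    """Approximate squared sliced Wasserstein-2 distance between two
    sample sets by averaging 1-D W2^2 over random projection directions.
    In 1-D, W2^2 between equal-size empirical distributions is the mean
    squared difference of their sorted samples."""
    rng = np.random.default_rng(seed)
    d = latents.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)          # random unit direction
        a = np.sort(latents @ theta)            # project and sort
        b = np.sort(prior @ theta)
        total += np.mean((a - b) ** 2)
    return total / n_proj

# Hypothetical usage: penalize a latent batch for drifting from N(0, I).
rng = np.random.default_rng(1)
z = rng.normal(size=(256, 16)) * 2.0 + 1.0      # shifted, scaled latents
prior = rng.normal(size=(256, 16))              # samples from the prior
reg = sliced_wasserstein(z, prior)              # larger when z mismatches prior
```

Added to a training loss, such a term pulls the latent distribution toward the prior without needing paired noisy-clean data, which matches the unsupervised setting the abstract describes.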
Similar Papers
Diffusion-Based Unsupervised Audio-Visual Speech Separation in Noisy Environments with Noise Prior
Audio and Speech Processing
Cleans up noisy audio to hear voices better.
A Fast and Lightweight Model for Causal Audio-Visual Speech Separation
Sound
Lets computers hear one voice in a noisy room.
UniSE: A Unified Framework for Decoder-only Autoregressive LM-based Speech Enhancement
Sound
Cleans up noisy audio for many tasks.