Diffusion-Based Unsupervised Audio-Visual Speech Separation in Noisy Environments with Noise Prior

Published: September 17, 2025 | arXiv ID: 2509.14379v1

By: Yochai Yemini, Rami Ben-Ari, Sharon Gannot, and more

Potential Business Impact:

Cleans up noisy single-microphone audio so voices can be heard more clearly.

Business Areas:
Speech Recognition Data and Analytics, Software

In this paper, we address the problem of single-microphone speech separation in the presence of ambient noise. We propose a generative unsupervised technique that directly models both clean speech and structured noise components, training exclusively on these individual signals rather than noisy mixtures. Our approach leverages an audio-visual score model that incorporates visual cues to serve as a strong generative speech prior. By explicitly modelling the noise distribution alongside the speech distribution, we enable effective decomposition through the inverse problem paradigm. We perform speech separation by sampling from the posterior distributions via a reverse diffusion process, which directly estimates and removes the modelled noise component to recover clean constituent signals. Experimental results demonstrate promising performance, highlighting the effectiveness of our direct noise modelling approach in challenging acoustic environments.
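The separation mechanism described above — combining a generative speech prior and an explicit noise prior with the observed mixture through posterior sampling — can be illustrated with a small sketch. This is not the paper's method: the learned audio-visual score model and the noise score model are replaced by toy Gaussian priors with analytic scores, and the reverse diffusion sampler is simplified to deterministic gradient ascent on the log-posterior (a MAP estimate). All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned score models (assumed, not from the paper):
# Gaussian priors whose scores (gradients of log-density) are analytic.
MU_S, VAR_S = 1.0, 0.25   # toy "speech" prior N(MU_S, VAR_S)
MU_N, VAR_N = 0.0, 1.00   # toy "noise" prior  N(MU_N, VAR_N)

def score_speech(s):
    """Score of the toy speech prior: grad log p(s)."""
    return -(s - MU_S) / VAR_S

def score_noise(n):
    """Score of the toy noise prior: grad log p(n)."""
    return -(n - MU_N) / VAR_N

def separate(y, n_steps=4000, step=1e-3, lam=50.0):
    """Deterministic simplification of posterior sampling: ascend
    log p(s) + log p(n) + log p(y | s, n), where the likelihood term
    -lam * ||y - (s + n)||^2 / 2 softly enforces mixture consistency."""
    s = np.zeros_like(y)
    n = np.zeros_like(y)
    for _ in range(n_steps):
        resid = y - (s + n)  # mixture-consistency residual
        s = s + step * (score_speech(s) + lam * resid)
        n = n + step * (score_noise(n) + lam * resid)
    return s, n

# Synthetic single-channel mixture y = s + n.
d = 64
s_true = MU_S + np.sqrt(VAR_S) * rng.standard_normal(d)
n_true = MU_N + np.sqrt(VAR_N) * rng.standard_normal(d)
y = s_true + n_true

s_est, n_est = separate(y)
```

At convergence the two estimates sum back to the observed mixture while each stays close to its own prior; in the paper this role is played by a reverse diffusion process with learned scores rather than gradient ascent with Gaussian ones.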

Country of Origin
🇮🇱 Israel

Page Count
5 pages

Category
Electrical Engineering and Systems Science:
Audio and Speech Processing