ZeroSep: Separate Anything in Audio with Zero Training
By: Chao Huang, Yuesheng Ma, Junxuan Huang, and more
Potential Business Impact:
Lets computers pick out a single sound in noisy places.
Audio source separation is fundamental for machines to understand complex acoustic environments and underpins numerous audio applications. Current supervised deep learning approaches, while powerful, are limited by the need for extensive, task-specific labeled data and struggle to generalize to the immense variability and open-set nature of real-world acoustic scenes. Inspired by the success of generative foundation models, we investigate whether pre-trained text-guided audio diffusion models can overcome these limitations. We make a surprising discovery: zero-shot source separation can be achieved purely through a pre-trained text-guided audio diffusion model under the right configuration. Our method, named ZeroSep, works by inverting the mixed audio into the diffusion model's latent space and then using text conditioning to guide the denoising process to recover individual sources. Without any task-specific training or fine-tuning, ZeroSep repurposes the generative diffusion model for a discriminative separation task and inherently supports open-set scenarios through its rich textual priors. ZeroSep is compatible with a variety of pre-trained text-guided audio diffusion backbones and delivers strong performance on multiple separation benchmarks, surpassing even supervised methods.
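Concretely, the abstract describes a two-step loop: deterministically invert the mixture into the diffusion model's noise latent, then rerun the denoising process while conditioning on a text prompt that names the target source. Below is a minimal sketch of that idea using standard DDIM updates. Everything model-specific is an assumption: the `model` interface (`encode_audio`, `embed_text`, `decode_latents`, `unet`, `alphas_cumprod`) is a hypothetical stand-in for a generic text-guided audio latent diffusion backbone, not the authors' released code, and conditioning the inversion on an empty prompt is one implementation choice, since the abstract only specifies text conditioning during denoising.

```python
import torch

@torch.no_grad()
def ddim_invert(x, cond, unet, alphas_cumprod, timesteps):
    # Deterministic DDIM inversion: walk the clean mixture latent back to noise.
    # `timesteps` must be ascending (low noise -> high noise).
    for i in range(len(timesteps) - 1):
        t, t_next = timesteps[i], timesteps[i + 1]
        a_t, a_next = alphas_cumprod[t], alphas_cumprod[t_next]
        eps = unet(x, t, cond)                              # predicted noise at t
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()      # implied clean latent
        x = a_next.sqrt() * x0 + (1 - a_next).sqrt() * eps  # step toward noise
    return x

@torch.no_grad()
def ddim_denoise(x, cond, unet, alphas_cumprod, timesteps):
    # Deterministic DDIM sampling: noise latent -> clean latent, guided by `cond`.
    for i in range(len(timesteps) - 1, 0, -1):
        t, t_prev = timesteps[i], timesteps[i - 1]
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t_prev]
        eps = unet(x, t, cond)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # step toward clean
    return x

@torch.no_grad()
def zerosep_separate(mixture_audio, target_prompt, model, num_steps=50):
    # 1. Encode the mixture waveform into the latent space (hypothetical encoder).
    z_mix = model.encode_audio(mixture_audio)
    # 2. Invert under an empty prompt (an assumption here), then denoise under
    #    the prompt naming the source to recover, e.g. "a dog barking".
    c_null = model.embed_text("")
    c_target = model.embed_text(target_prompt)
    ts = torch.linspace(0, 999, num_steps).long()           # ascending schedule
    z_noise = ddim_invert(z_mix, c_null, model.unet, model.alphas_cumprod, ts)
    z_source = ddim_denoise(z_noise, c_target, model.unet, model.alphas_cumprod, ts)
    # 3. Decode the separated latent back to a waveform (hypothetical decoder).
    return model.decode_latents(z_source)
```

Under this reading, calling `zerosep_separate` once per prompt ("a dog barking", "rain falling", ...) would yield one estimate per source, which is how a purely generative model gets repurposed for discriminative separation without any training.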
Similar Papers
Unsupervised Single-Channel Speech Separation with a Diffusion Prior under Speaker-Embedding Guidance
Audio and Speech Processing
Separates voices from mixed sounds using AI.
Unsupervised Single-Channel Audio Separation with Diffusion Source Priors
Audio and Speech Processing
Separates sounds from recordings without needing perfect examples.