PromptSep: Generative Audio Separation via Multimodal Prompting
By: Yutong Wen, Ke Chen, Prem Seetharaman, and more
Potential Business Impact:
Lets users extract or remove specific sounds using text prompts or vocal imitation.
Recent breakthroughs in language-queried audio source separation (LASS) have shown that generative models can achieve higher separation quality than traditional masking-based approaches. However, two key limitations restrict their practical use: (1) users often require operations beyond separation, such as sound removal; and (2) relying solely on text prompts can be unintuitive for specifying sound sources. In this paper, we propose PromptSep to extend LASS into a broader framework for general-purpose sound separation. PromptSep leverages a conditional diffusion model, enhanced with carefully designed data simulation, to enable both audio extraction and sound removal. To move beyond text-only queries, we incorporate vocal imitation as an additional and more intuitive conditioning modality, adopting Sketch2Sound as a data augmentation strategy. Both objective and subjective evaluations on multiple benchmarks demonstrate that PromptSep achieves state-of-the-art performance on sound removal and vocal-imitation-guided source separation, while maintaining competitive results on language-queried source separation.
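The abstract does not include code, so the following is only a minimal PyTorch-style sketch of the core idea it describes: a diffusion denoiser conditioned on the input mixture, a prompt embedding (which could come from either a text encoder or a vocal-imitation encoder), and a task flag that switches between extraction and removal. All module names, dimensions, and the training objective here are illustrative assumptions, not PromptSep's actual architecture.

```python
# Illustrative sketch only (not the authors' implementation): a denoiser
# conditioned on the mixture, a prompt embedding (text or vocal imitation),
# and a task flag (0 = extract the prompted sound, 1 = remove it).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyPromptedDenoiser(nn.Module):
    def __init__(self, hidden: int = 64, prompt_dim: int = 512):
        super().__init__()
        self.task_embed = nn.Embedding(2, prompt_dim)         # extract vs. remove
        self.cond_proj = nn.Linear(prompt_dim + 1, hidden)    # prompt/task + timestep
        self.in_conv = nn.Conv1d(2, hidden, kernel_size=3, padding=1)
        self.out_conv = nn.Conv1d(hidden, 1, kernel_size=3, padding=1)

    def forward(self, x_t, mixture, t, prompt_emb, task_id):
        # x_t, mixture: (B, 1, T); t, task_id: (B,); prompt_emb: (B, prompt_dim)
        cond = prompt_emb + self.task_embed(task_id)               # fuse prompt and task
        cond = torch.cat([cond, t.float().unsqueeze(-1)], dim=-1)  # append timestep
        h = self.in_conv(torch.cat([x_t, mixture], dim=1))         # mixture conditioning
        h = h + self.cond_proj(cond).unsqueeze(-1)                 # broadcast over time
        return self.out_conv(torch.relu(h))                        # predicted noise


def training_step(model, target, mixture, prompt_emb, task_id, alphas_bar):
    """Standard denoising objective: noise the target at a random step and
    train the model to predict that noise given mixture + prompt + task."""
    b = target.shape[0]
    t = torch.randint(0, alphas_bar.numel(), (b,))
    a = alphas_bar[t].view(b, 1, 1)
    noise = torch.randn_like(target)
    x_t = a.sqrt() * target + (1.0 - a).sqrt() * noise
    return F.mse_loss(model(x_t, mixture, t, prompt_emb, task_id), noise)


# Example usage: prompt_emb would come from a text or vocal-imitation encoder;
# here it is random, as are the audio tensors.
model = ToyPromptedDenoiser()
alphas_bar = torch.cumprod(1.0 - torch.linspace(1e-4, 2e-2, 1000), dim=0)
mixture, target = torch.randn(4, 1, 16000), torch.randn(4, 1, 16000)
prompt_emb = torch.randn(4, 512)
task_id = torch.tensor([0, 0, 1, 1])             # extract, extract, remove, remove
training_step(model, target, mixture, prompt_emb, task_id, alphas_bar).backward()
```

Per the abstract, PromptSep obtains its vocal-imitation conditioning through Sketch2Sound-based data augmentation and a dedicated data-simulation recipe; neither of those components is modeled in this toy example.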
Similar Papers
Unsupervised Single-Channel Speech Separation with a Diffusion Prior under Speaker-Embedding Guidance
Audio and Speech Processing
Separates voices from mixed sounds using AI.
Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model
Sound
Separates singing voice from music quickly and efficiently.
Neural Audio Codecs for Prompt-Driven Universal Source Separation
Sound
Separates many kinds of sounds from a mix using prompts and neural audio codecs.