MGE-LDM: Joint Latent Diffusion for Simultaneous Music Generation and Source Extraction
By: Yunkee Chae, Kyogu Lee
Potential Business Impact:
Generates music and separates out any instrument or vocal sound from a mix.
We present MGE-LDM, a unified latent diffusion framework for simultaneous music generation, source imputation, and query-driven source separation. Unlike prior approaches constrained to fixed instrument classes, MGE-LDM learns a joint distribution over full mixtures, submixtures, and individual stems within a single compact latent diffusion model. At inference, MGE-LDM enables (1) complete mixture generation, (2) partial generation (i.e., source imputation), and (3) text-conditioned extraction of arbitrary sources. By formulating both separation and imputation as conditional inpainting tasks in the latent space, our approach supports flexible, class-agnostic manipulation of arbitrary instrument sources. Notably, MGE-LDM can be trained jointly across heterogeneous multi-track datasets (e.g., Slakh2100, MUSDB18, MoisesDB) without relying on predefined instrument categories. Audio samples are available at our project page: https://yoongi43.github.io/MGELDM_Samples/.
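The abstract's central idea, casting both separation and imputation as conditional inpainting over a joint latent of mixture, submixture, and stem tracks, can be illustrated with a short sketch. The snippet below is a minimal, RePaint-style illustration assuming a trained latent diffusion model exposed as a `denoise_step` callable; all names, shapes, and the toy noise schedule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch of diffusion inpainting over a joint latent
# z = [z_mix, z_submix, z_src]. Names, shapes, and the schedule are
# hypothetical; the real model defines its own components.
import torch

def inpaint_sample(denoise_step, z_known, known_mask, text_emb, num_steps=50):
    """
    denoise_step(z_t, t, text_emb) -> z_{t-1}: one reverse-diffusion step
        of a trained latent diffusion model (assumed given).
    z_known    : joint latent with observed tracks filled in, zeros elsewhere.
    known_mask : 1 where the latent is observed (e.g., the mixture track for
                 separation), 0 where it must be generated (e.g., the stem).
    text_emb   : embedding of the text query describing the target source.
    """
    z_t = torch.randn_like(z_known)  # start from pure noise
    for step in range(num_steps, 0, -1):
        t = torch.full((z_t.shape[0],), step, device=z_t.device)
        # Denoise the whole joint latent by one step.
        z_t = denoise_step(z_t, t, text_emb)
        # Re-impose the observed tracks at the matching noise level so the
        # generated (masked-out) tracks stay consistent with them.
        noise = torch.randn_like(z_known)
        alpha_bar = 1.0 - step / num_steps  # toy schedule, for illustration only
        z_obs_t = alpha_bar**0.5 * z_known + (1 - alpha_bar) ** 0.5 * noise
        z_t = known_mask * z_obs_t + (1 - known_mask) * z_t
    return z_t
```

Under this reading, text-queried extraction would mark the mixture track as known and the stem track as unknown, source imputation would mark the existing stems as known and the missing one as unknown, and full mixture generation would use an all-zero mask.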
Similar Papers
Boosting Generative Image Modeling via Joint Image-Feature Synthesis
CV and Pattern Recognition
Creates better pictures by understanding what they mean.
Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model
Sound
Separates singing voices from music quickly and efficiently.
Multi-focal Conditioned Latent Diffusion for Person Image Synthesis
CV and Pattern Recognition
Generates realistic images of people.