Conditional Diffusion as Latent Constraints for Controllable Symbolic Music Generation
By: Matteo Pettenó, Alessandro Ilic Mezza, Alberto Bernardini
Potential Business Impact:
Gives musicians precise, fader-like control over generated music.
Recent advances in latent diffusion models have demonstrated state-of-the-art performance in high-dimensional time-series data synthesis while providing flexible control through conditioning and guidance. However, existing methodologies primarily rely on musical context or natural language as the main modality for interacting with the generative process, which may not be ideal for expert users who seek precise fader-like control over specific musical attributes. In this work, we explore the application of denoising diffusion processes as plug-and-play latent constraints for unconditional symbolic music generation models. We focus on a framework that leverages a library of small conditional diffusion models operating as implicit probabilistic priors on the latents of a frozen unconditional backbone. While previous studies have explored domain-specific use cases, this work, to the best of our knowledge, is the first to demonstrate the versatility of such an approach across a diverse array of musical attributes, such as note density, pitch range, contour, and rhythm complexity. Our experiments show that diffusion-driven constraints outperform traditional attribute regularization and other latent constraint architectures, achieving significantly stronger correlations between target and generated attributes while maintaining high perceptual quality and diversity.
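The core idea in the abstract can be sketched as follows: a small conditional diffusion model samples a latent code matching a target attribute value, and a frozen unconditional backbone then decodes that latent. This is a minimal illustrative sketch, not the paper's implementation; the toy linear "denoiser", the attribute encoding `c`, and the stand-in `frozen_decoder` are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, DIM = 50, 8                      # diffusion steps, latent dimensionality
betas = np.linspace(1e-4, 0.05, T)  # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(z_t, t, c):
    """Toy conditional eps-prediction: pretends clean latents cluster around
    c * ones (e.g., c encoding a target note density). In the paper's framework
    this would be a small learned conditional diffusion model."""
    mu = c * np.ones(DIM)
    # Invert z_t = sqrt(a_bar)*z0 + sqrt(1-a_bar)*eps under the guess z0 = mu
    return (z_t - np.sqrt(alpha_bars[t]) * mu) / np.sqrt(1.0 - alpha_bars[t])

def sample_latent(c):
    """Ancestral DDPM sampling of a latent constrained to attribute value c."""
    z = rng.standard_normal(DIM)
    for t in reversed(range(T)):
        eps = toy_denoiser(z, t, c)
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:
            z += np.sqrt(betas[t]) * rng.standard_normal(DIM)
    return z

def frozen_decoder(z):
    """Stand-in for the frozen unconditional backbone's decoder,
    which would map latents to symbolic music."""
    return np.tanh(z)

# Sweeping c acts like a fader: low vs. high target attribute values
low = frozen_decoder(sample_latent(c=-2.0))
high = frozen_decoder(sample_latent(c=2.0))
```

The backbone stays frozen throughout; only the lightweight conditional prior over its latents changes, which is what makes the approach plug-and-play across different attributes.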
Similar Papers
Mamba-Diffusion Model with Learnable Wavelet for Controllable Symbolic Music Generation
Sound
Makes computers write music like a pro.
Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model
Sound
Separates singing voice from music quickly and efficiently.
Softly Constrained Denoisers for Diffusion Models
Machine Learning (CS)
Makes AI create images that follow rules better.