Mamba-Diffusion Model with Learnable Wavelet for Controllable Symbolic Music Generation
By: Jincheng Zhang, György Fazekas, Charalampos Saitis
Potential Business Impact:
Lets computers compose high-quality music with controllable chords.
The recent surge in the popularity of diffusion models for image synthesis has attracted new attention to their potential for generation tasks in other domains. However, their application to symbolic music generation remains largely under-explored, because symbolic music is typically represented as sequences of discrete events and standard diffusion models are not well-suited for discrete data. We represent symbolic music as image-like pianorolls, facilitating the use of diffusion models for symbolic music generation. Moreover, we introduce a novel diffusion model that incorporates our proposed Transformer-Mamba block and a learnable wavelet transform. Classifier-free guidance is utilised to generate symbolic music with target chords. Our evaluation shows that the proposed method achieves compelling results in terms of music quality and controllability, outperforming a strong baseline in pianoroll generation. Our code is available at https://github.com/jinchengzhanggg/proffusion.
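To illustrate the chord-conditioning mechanism mentioned in the abstract, below is a minimal sketch of classifier-free guidance applied to a noise-prediction network over pianorolls. The function name, the model interface (a denoiser taking a noisy pianoroll, a timestep, and a chord condition), the tensor shapes, and the guidance scale are illustrative assumptions, not the repository's actual API; the real implementation is in the linked code.

```python
# Sketch: classifier-free guidance for chord-conditioned pianoroll denoising.
# Assumed interface: model(x_t, t, cond) -> predicted noise (hypothetical).
import torch

def cfg_noise_estimate(model, x_t, t, chord_cond, null_cond, guidance_scale=2.0):
    """Combine conditional and unconditional noise predictions:
    eps_hat = eps_uncond + w * (eps_cond - eps_uncond)."""
    eps_cond = model(x_t, t, chord_cond)    # prediction given the target chords
    eps_uncond = model(x_t, t, null_cond)   # prediction with the condition dropped
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

if __name__ == "__main__":
    # Toy usage with a stand-in denoiser and hypothetical shapes:
    # a batch of 2 pianorolls with 128 pitches x 64 time steps.
    model = lambda x, t, c: torch.randn_like(x)  # placeholder denoiser
    x_t = torch.randn(2, 1, 128, 64)             # noisy pianorolls
    t = torch.tensor([500, 500])                 # diffusion timesteps
    chord = torch.randn(2, 16)                   # hypothetical chord embedding
    null = torch.zeros_like(chord)               # "no condition" embedding
    eps_hat = cfg_noise_estimate(model, x_t, t, chord, null, guidance_scale=3.0)
    print(eps_hat.shape)  # torch.Size([2, 1, 128, 64])
```

Larger guidance scales push samples more strongly toward the target chords at some cost to diversity; a scale of zero recovers unconditional generation.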
Similar Papers
Conditional Diffusion as Latent Constraints for Controllable Symbolic Music Generation
Machine Learning (CS)
Lets musicians precisely control music creation.
LZMidi: Compression-Based Symbolic Music Generation
Sound
Makes music faster and cheaper on normal computers.
Versatile Symbolic Music-for-Music Modeling via Function Alignment
Sound
AI writes music by learning music's own language.