ProGress: Structured Music Generation via Graph Diffusion and Hierarchical Music Analysis
By: Stephen Ni-Hahn, Chao Péter Yang, Mingchen Ma, and more
Potential Business Impact:
Makes AI-generated music more coherent and controllable.
Artificial Intelligence (AI) for music generation is developing rapidly, with recent symbolic models leveraging sophisticated deep learning and diffusion algorithms. One drawback of existing models is that they lack structural cohesion, particularly in harmonic-melodic structure. Furthermore, such models are largely "black-box" in nature and are not musically interpretable. This paper addresses these limitations via a novel generative music framework that incorporates concepts of Schenkerian analysis (SchA) in concert with a diffusion modeling framework. This framework, which we call ProGress (Prolongation-enhanced DiGress), adapts state-of-the-art deep models for discrete diffusion (in particular, the DiGress model of Vignac et al., 2023) for interpretable and structured music generation. Concretely, our contributions include 1) novel adaptations of the DiGress model for music generation, 2) a novel SchA-inspired phrase fusion methodology, and 3) a framework allowing users to control various aspects of the generation process to create coherent musical compositions. Results from human experiments suggest that ProGress outperforms existing state-of-the-art methods.
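The abstract gives no implementation details, but the DiGress backbone it builds on is a discrete denoising diffusion model over graphs with categorical node and edge types. The Python sketch below illustrates one forward noising step of that kind of process on a toy "music graph"; the pitch-class node vocabulary, the edge types, the uniform noise kernel, and the noise level beta_t are illustrative assumptions, not details taken from the ProGress paper:

# Illustrative sketch only: one DiGress-style discrete forward-diffusion step
# on a toy "music graph". Node categories stand in for pitch classes and edge
# categories for melodic relations; the encoding and schedule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_PITCH_CLASSES = 12   # node categories: pitch classes C..B
NUM_EDGE_TYPES = 3       # edge categories: 0 = no edge, 1 = melodic step, 2 = other

def uniform_transition(num_classes: int, beta: float) -> np.ndarray:
    """Q_t = (1 - beta) * I + beta * (ones / K): keep the current category with
    probability 1 - beta, otherwise resample uniformly (DiGress's uniform kernel)."""
    return (1.0 - beta) * np.eye(num_classes) \
        + beta * np.ones((num_classes, num_classes)) / num_classes

def noise_one_step(one_hot: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Sample x_t ~ Categorical(x_{t-1} @ Q) independently per node/edge."""
    probs = one_hot @ Q
    flat = probs.reshape(-1, probs.shape[-1])
    samples = np.array([rng.choice(len(p), p=p) for p in flat])
    return np.eye(probs.shape[-1])[samples].reshape(one_hot.shape)

# A four-note toy phrase: C, E, G, C (pitch classes 0, 4, 7, 0).
notes = [0, 4, 7, 0]
X = np.eye(NUM_PITCH_CLASSES)[notes]        # (n, K_nodes) one-hot node features

# Adjacency: mark consecutive notes as "melodic step" (type 1), rest as "no edge" (0).
E = np.zeros((len(notes), len(notes)), dtype=int)
for i in range(len(notes) - 1):
    E[i, i + 1] = E[i + 1, i] = 1
E_one_hot = np.eye(NUM_EDGE_TYPES)[E]       # (n, n, K_edges) one-hot edge features

beta_t = 0.2                                # assumed noise level at step t
X_noisy = noise_one_step(X, uniform_transition(NUM_PITCH_CLASSES, beta_t))
E_noisy = noise_one_step(E_one_hot, uniform_transition(NUM_EDGE_TYPES, beta_t))

print("noisy pitch classes:", X_noisy.argmax(-1))

In DiGress-style training, a graph transformer is then taught to reverse these categorical corruption steps; ProGress's SchA-informed contributions (phrase fusion and user controls) would sit on top of such a backbone.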
Similar Papers
MusicScaffold: Bridging Machine Efficiency and Human Growth in Adolescent Creative Education through Generative AI
Human-Computer Interaction
Helps teens learn music by guiding AI.
From Generation to Attribution: Music AI Agent Architectures for the Post-Streaming Era
Information Retrieval
Lets AI music pay artists fairly.