Listen to Rhythm, Choose Movements: Autoregressive Multimodal Dance Generation via Diffusion and Mamba with Decoupled Dance Dataset
By: Oran Duan, Yinghua Shen, Yingzhu Lv, and more
Potential Business Impact:
Makes computer-generated dances follow music and text instructions.
Advances in generative models and sequence learning have greatly advanced dance motion generation, yet current methods still suffer from coarse semantic control and poor coherence over long sequences. In this work, we present Listen to Rhythm, Choose Movements (LRCM), a multimodal-guided diffusion framework that supports both diverse input modalities and autoregressive dance motion generation. We explore a feature-decoupling paradigm for dance datasets and generalize it to the Motorica Dance dataset, separating motion-capture data, audio rhythm, and professionally annotated global and local text descriptions. Our diffusion architecture integrates an audio-latent Conformer and a text-latent Cross-Conformer, and incorporates a Motion Temporal Mamba Module (MTMM) to enable smooth, long-duration autoregressive synthesis. Experimental results show that LRCM performs strongly in both functional capability and quantitative metrics, demonstrating notable potential for multimodal input scenarios and extended sequence generation. We will release the full codebase, dataset, and pretrained models publicly upon acceptance.
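To make the described conditioning flow more concrete, the sketch below shows one plausible way such a denoiser could be wired up: motion latents fused with audio features in a Conformer-style block, text tokens injected via cross-attention, and a temporal module carrying state across autoregressive windows. This is not the authors' code; all module names, layer choices, and dimensions are illustrative assumptions, and the MTMM's Mamba state-space layer is approximated here by a plain recurrent stand-in.

```python
# Minimal sketch (assumptions, not the LRCM implementation) of a multimodal-
# conditioned diffusion denoiser with an autoregressive temporal module.
import torch
import torch.nn as nn


class AudioConformerBlock(nn.Module):
    """Self-attention + conv block that fuses motion latents with audio features."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.conv = nn.Sequential(nn.Conv1d(dim, dim, kernel_size=3, padding=1), nn.GELU())
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, audio):
        # Add audio features to motion latents, then self-attend (one simple fusion choice).
        h = self.norm(x + audio)
        h, _ = self.attn(h, h, h)
        h = h + self.conv(h.transpose(1, 2)).transpose(1, 2)
        return x + h


class TextCrossBlock(nn.Module):
    """Cross-attention from motion latents (queries) to text tokens (keys/values)."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, text_tokens):
        h, _ = self.attn(self.norm(x), text_tokens, text_tokens)
        return x + h


class MotionTemporalBlock(nn.Module):
    """Stand-in for the Motion Temporal Mamba Module: a causal recurrent layer
    whose hidden state is carried across autoregressive windows."""
    def __init__(self, dim: int):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)

    def forward(self, x, state=None):
        h, state = self.rnn(x, state)
        return x + h, state


class DanceDenoiser(nn.Module):
    """Predicts the noise on a window of motion latents, conditioned on the
    diffusion timestep, per-frame audio features, and text-token embeddings."""
    def __init__(self, motion_dim=135, dim=256, text_dim=512, audio_dim=35):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, dim)
        self.audio_proj = nn.Linear(audio_dim, dim)
        self.text_proj = nn.Linear(text_dim, dim)
        self.t_embed = nn.Sequential(nn.Linear(1, dim), nn.SiLU(), nn.Linear(dim, dim))
        self.audio_block = AudioConformerBlock(dim)
        self.text_block = TextCrossBlock(dim)
        self.temporal = MotionTemporalBlock(dim)
        self.out_proj = nn.Linear(dim, motion_dim)

    def forward(self, noisy_motion, t, audio_feats, text_tokens, state=None):
        x = self.in_proj(noisy_motion) + self.t_embed(t[:, None, None].float())
        x = self.audio_block(x, self.audio_proj(audio_feats))
        x = self.text_block(x, self.text_proj(text_tokens))
        x, state = self.temporal(x, state)   # state links consecutive windows
        return self.out_proj(x), state       # predicted noise + carried state


if __name__ == "__main__":
    model = DanceDenoiser()
    noise_pred, state = model(
        torch.randn(2, 60, 135),        # 60-frame window of noised motion latents
        torch.randint(0, 1000, (2,)),   # diffusion timesteps
        torch.randn(2, 60, 35),         # per-frame audio (e.g., rhythm) features
        torch.randn(2, 16, 512),        # text-token embeddings (global + local prompts)
    )
    print(noise_pred.shape)             # torch.Size([2, 60, 135])
```

In this reading, long sequences would be generated window by window, with the recurrent (in the paper, Mamba) state passed from one window to the next so that consecutive segments stay coherent.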
Similar Papers
DanceChat: Large Language Model-Guided Music-to-Dance Generation
CV and Pattern Recognition
Turns music into expressive dance moves with LLM guidance.
DanceMosaic: High-Fidelity Dance Generation with Multimodal Editability
Graphics
Creates realistic, editable 3D dances from music and text.
OmniMotion: Multimodal Motion Generation with Continuous Masked Autoregression
CV and Pattern Recognition
Makes characters move realistically from text, speech, and music.