OmniMotion: Multimodal Motion Generation with Continuous Masked Autoregression
By: Zhe Li, Weihao Yuan, Weichao Shen, and more
Potential Business Impact:
Makes characters move realistically from text, speech, and music.
Whole-body multimodal human motion generation poses two primary challenges: designing an effective motion generation mechanism and integrating various modalities, such as text, speech, and music, into a cohesive framework. Unlike previous methods, which usually employ discrete masked modeling or autoregressive modeling, we develop a continuous masked autoregressive motion transformer in which causal attention is performed to respect the sequential nature of human motion. Within this transformer, we introduce a gated linear attention and an RMSNorm module, which drive the transformer to attend to key actions and suppress the instability caused by abnormal movements or by the heterogeneous distributions across modalities. To further enhance both motion generation and multimodal generalization, we employ a DiT structure to diffuse the conditions from the transformer towards the targets. To fuse the different modalities, AdaLN and cross-attention are leveraged to inject the text, speech, and music signals. Experimental results demonstrate that our framework outperforms previous methods across all modalities, including text-to-motion, speech-to-gesture, and music-to-dance. The code will be made public.
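To make the architectural ingredients named in the abstract concrete, below is a minimal PyTorch sketch of (1) a causal, recurrent form of gated linear attention preceded by RMSNorm and (2) AdaLN-style injection of a pooled condition embedding (text, speech, or music). All module names, dimensions, and the exact gating and update rules are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch only: a causal gated-linear-attention block with RMSNorm
# and AdaLN-style condition injection. Names and equations are assumptions.
import torch
import torch.nn as nn


class RMSNorm(nn.Module):
    """Root-mean-square normalization (no mean subtraction, learned scale)."""
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.eps = eps
        self.scale = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.scale


class GatedLinearAttention(nn.Module):
    """Single-head, recurrent form of gated linear attention.

    A running state S_t (d x d) is decayed by a data-dependent gate and
    updated with the outer product k_t v_t^T; causality comes from scanning
    the motion frames strictly left to right.
    """
    def __init__(self, dim: int):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.gate = nn.Linear(dim, dim, bias=False)
        self.out = nn.Linear(dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        b, t, d = x.shape
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        g = torch.sigmoid(self.gate(x))        # per-channel decay in (0, 1)
        state = x.new_zeros(b, d, d)           # running key-value memory
        outputs = []
        for i in range(t):                     # causal scan over frames
            state = g[:, i].unsqueeze(-1) * state \
                + k[:, i].unsqueeze(-1) * v[:, i].unsqueeze(1)
            outputs.append(torch.einsum("bd,bde->be", q[:, i], state))
        return self.out(torch.stack(outputs, dim=1))


class ConditionedBlock(nn.Module):
    """Pre-norm block with AdaLN-style modulation from a condition embedding."""
    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.norm = RMSNorm(dim)
        self.attn = GatedLinearAttention(dim)
        # AdaLN: the condition predicts a per-channel shift and scale.
        self.ada = nn.Linear(cond_dim, 2 * dim)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        shift, scale = self.ada(cond).unsqueeze(1).chunk(2, dim=-1)
        h = self.norm(x) * (1 + scale) + shift  # condition-modulated features
        return x + self.attn(h)                 # residual connection


if __name__ == "__main__":
    block = ConditionedBlock(dim=64, cond_dim=128)
    motion = torch.randn(2, 30, 64)      # 30 motion frames, 64-d features
    condition = torch.randn(2, 128)      # pooled text / speech / music embedding
    print(block(motion, condition).shape)  # torch.Size([2, 30, 64])
```

In the paper's full pipeline, such conditioned blocks would feed a DiT-style diffusion head that maps the transformer's outputs to continuous motion targets; the sketch above only illustrates the attention and conditioning mechanics.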
Similar Papers
OmniMotion-X: Versatile Multimodal Whole-Body Motion Generation
CV and Pattern Recognition
Makes characters move realistically from text or music.
OmniMoGen: Unifying Human Motion Generation via Learning from Interleaved Text-Motion Instructions
CV and Pattern Recognition
Creates any dance move from simple words.
MIDAS: Multimodal Interactive Digital-humAn Synthesis via Real-time Autoregressive Video Generation
CV and Pattern Recognition
Creates talking videos from sound, pose, and text.