CoMoVi: Co-Generation of 3D Human Motions and Realistic Videos
By: Chengfeng Zhao, Jiazhi Shu, Yubo Zhao, and more
In this paper, we find that the generation of 3D human motions and 2D human videos is intrinsically coupled: 3D motions provide a structural prior for plausibility and consistency in videos, while pre-trained video models offer strong generalization capabilities for motions. This motivates coupling the two generation processes. Based on this observation, we present CoMoVi, a co-generative framework that couples two video diffusion models (VDMs) to generate 3D human motions and videos synchronously within a single diffusion denoising loop. To achieve this, we first propose an effective 2D human motion representation that inherits the powerful prior of pre-trained VDMs. We then design a dual-branch diffusion model that couples the human motion and video generation processes through mutual feature interaction and 3D-2D cross-attention. Moreover, we curate the CoMoVi Dataset, a large-scale real-world human video dataset with text and motion annotations, covering diverse and challenging human motions. Extensive experiments demonstrate the effectiveness of our method on both 3D human motion and video generation tasks.
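The abstract describes the coupling only at a high level. As a rough illustration of the general idea, below is a minimal PyTorch sketch of a dual-branch denoiser in which a 3D motion branch and a 2D video branch exchange features through cross-attention inside one shared denoising loop. All module names, tensor dimensions, and the crude step-size update are assumptions made for illustration, not CoMoVi's actual architecture or sampler.

```python
import torch
import torch.nn as nn

class DualBranchDenoiser(nn.Module):
    """Toy dual-branch denoiser: a motion branch and a video branch
    coupled via 3D-2D cross-attention (illustrative sketch only)."""
    def __init__(self, motion_dim=64, video_dim=128, hidden=256, heads=4):
        super().__init__()
        self.motion_in = nn.Linear(motion_dim, hidden)
        self.video_in = nn.Linear(video_dim, hidden)
        # Cross-attention in both directions (hypothetical placement)
        self.m2v_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.v2m_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.motion_out = nn.Linear(hidden, motion_dim)
        self.video_out = nn.Linear(hidden, video_dim)

    def forward(self, x_motion, x_video):
        m = self.motion_in(x_motion)     # (B, T, hidden) motion tokens
        v = self.video_in(x_video)       # (B, N, hidden) video tokens
        # Each branch queries the other, coupling the two streams
        m_c, _ = self.v2m_attn(m, v, v)  # motion attends to video features
        v_c, _ = self.m2v_attn(v, m, m)  # video attends to motion features
        return self.motion_out(m + m_c), self.video_out(v + v_c)

@torch.no_grad()
def joint_denoise(model, steps=50, T=16, N=64):
    """Single denoising loop over both modalities (toy update rule)."""
    x_m = torch.randn(1, T, 64)   # noisy 3D motion tokens
    x_v = torch.randn(1, N, 128)  # noisy video latent tokens
    for t in range(steps):
        eps_m, eps_v = model(x_m, x_v)  # per-branch noise predictions
        scale = 1.0 / (steps - t)       # crude step size, illustration only
        x_m = x_m - scale * eps_m
        x_v = x_v - scale * eps_v
    return x_m, x_v

model = DualBranchDenoiser()
motion, video = joint_denoise(model)
print(motion.shape, video.shape)
```

The point of the sketch is the shared loop: both branches are denoised in lockstep, so each modality can condition the other at every step rather than being generated sequentially.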