CoMo: Compositional Motion Customization for Text-to-Video Generation
By: Youcan Xu, Zhen Wang, Jiaxin Shi, and more
Potential Business Impact:
Makes videos show many actions at once.
While recent text-to-video models excel at generating diverse scenes, they struggle with precise motion control, particularly for complex, multi-subject motions. Although methods for single-motion customization have been developed to address this gap, they fail in compositional scenarios due to two primary challenges: motion-appearance entanglement and ineffective multi-motion blending. This paper introduces CoMo, a novel framework for compositional motion customization in text-to-video generation, enabling the synthesis of multiple, distinct motions within a single video. CoMo addresses these issues through a two-phase approach. First, in the single-motion learning phase, a static-dynamic decoupled tuning paradigm disentangles motion from appearance to learn a motion-specific module. Second, in the multi-motion composition phase, a plug-and-play divide-and-merge strategy composes these learned motions without additional training by spatially isolating their influence during the denoising process. To facilitate research in this new domain, we also introduce a new benchmark and a novel evaluation metric designed to assess multi-motion fidelity and blending. Extensive experiments demonstrate that CoMo achieves state-of-the-art performance, significantly advancing the capabilities of controllable video generation. Our project page is at https://como6.github.io/.
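The divide-and-merge idea in the abstract can be pictured as a single denoising step in which each learned motion module predicts noise only inside its own spatial region, and the region-wise predictions are merged before the step completes. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; the names (divide_and_merge_step, motion_modules, base_denoise, masks) are assumptions for illustration, not CoMo's actual API, and the masks are assumed to be disjoint.

```python
# Hypothetical sketch of a divide-and-merge denoising step: each learned
# single-motion module influences only its own masked region of the video
# latent, and a base (motion-free) prediction fills the remaining background.
import torch

def divide_and_merge_step(latent, t, text_emb, motion_modules, masks, base_denoise):
    """Compose multiple motion modules spatially within one denoising step.

    latent:         (B, C, F, H, W) video latent at timestep t
    motion_modules: list of callables, each the denoiser with one motion module attached
    masks:          list of (1, 1, 1, H, W) binary masks, one disjoint region per motion
    base_denoise:   denoiser without any motion module (used for the background)
    """
    # Base prediction for regions not owned by any motion module.
    eps_base = base_denoise(latent, t, text_emb)

    background = torch.ones_like(masks[0])
    merged = torch.zeros_like(eps_base)

    # Divide: each module's noise prediction is confined to its own region.
    for module, mask in zip(motion_modules, masks):
        eps_m = module(latent, t, text_emb)
        merged = merged + mask * eps_m
        background = background * (1.0 - mask)

    # Merge: uncovered regions fall back to the base prediction.
    return merged + background * eps_base
```

Because the composition acts per step on noise predictions rather than through further tuning, the learned motion modules stay plug-and-play, which is consistent with the training-free composition described in the abstract.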
Similar Papers
ConMo: Controllable Motion Disentanglement and Recomposition for Zero-Shot Motion Transfer
CV and Pattern Recognition
Lets you move people in videos like puppets.
MoCo: Motion-Consistent Human Video Generation via Structure-Appearance Decoupling
CV and Pattern Recognition
Makes videos of people move realistically from words.
Dense Motion Captioning
CV and Pattern Recognition
Helps computers understand and describe human movements.