MoAlign: Motion-Centric Representation Alignment for Video Diffusion Models
By: Aritra Bhowmik, Denis Korzhenkov, Cees G. M. Snoek, and more
Potential Business Impact:
Makes videos look more real and move naturally.
Text-to-video diffusion models have enabled high-quality video synthesis, yet often fail to generate temporally coherent and physically plausible motion. A key reason is the models' insufficient understanding of complex motions that natural videos often entail. Recent works tackle this problem by aligning diffusion model features with those from pretrained video encoders. However, these encoders mix video appearance and dynamics into entangled features, limiting the benefit of such alignment. In this paper, we propose a motion-centric alignment framework that learns a disentangled motion subspace from a pretrained video encoder. This subspace is optimized to predict ground-truth optical flow, ensuring it captures true motion dynamics. We then align the latent features of a text-to-video diffusion model to this new subspace, enabling the generative model to internalize motion knowledge and generate more plausible videos. Our method improves the physical commonsense in a state-of-the-art video diffusion model, while preserving adherence to textual prompts, as evidenced by empirical evaluations on VideoPhy, VideoPhy2, VBench, and VBench-2.0, along with a user study.
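To make the two-stage idea in the abstract concrete, here is a minimal sketch of how the training objectives could be wired together: a projection head maps frozen video-encoder features into a motion subspace supervised by ground-truth optical flow, and an alignment loss pulls diffusion-model features toward that subspace. This is not the authors' implementation; all module names, feature shapes, and the specific losses (L1 for flow, cosine similarity for alignment) are assumptions made for illustration.

```python
# Hypothetical sketch of the motion-centric alignment objectives (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MotionProjector(nn.Module):
    """Maps frozen video-encoder features into a lower-dimensional 'motion subspace'.
    Stage 1: the subspace is trained to predict ground-truth optical flow."""
    def __init__(self, feat_dim=1024, motion_dim=256, flow_dim=2 * 16 * 16):
        super().__init__()
        self.to_motion = nn.Linear(feat_dim, motion_dim)   # disentangled motion features
        self.flow_head = nn.Linear(motion_dim, flow_dim)   # predicts per-patch flow (u, v)

    def forward(self, encoder_feats):
        motion = self.to_motion(encoder_feats)             # (B, T, N, motion_dim)
        flow_pred = self.flow_head(motion)                 # (B, T, N, flow_dim)
        return motion, flow_pred

def flow_prediction_loss(flow_pred, flow_gt):
    # Supervise the motion subspace with ground-truth optical flow.
    return F.l1_loss(flow_pred, flow_gt)

def motion_alignment_loss(diffusion_feats, motion_feats, adapter):
    # Stage 2: align intermediate diffusion features (through a small adapter)
    # with the frozen motion-subspace features via negative cosine similarity.
    projected = adapter(diffusion_feats)
    return 1.0 - F.cosine_similarity(projected, motion_feats.detach(), dim=-1).mean()

if __name__ == "__main__":
    B, T, N, feat_dim, diff_dim = 2, 8, 64, 1024, 768      # assumed toy dimensions
    projector = MotionProjector(feat_dim=feat_dim)
    adapter = nn.Linear(diff_dim, 256)                      # hypothetical alignment adapter

    encoder_feats = torch.randn(B, T, N, feat_dim)          # frozen video-encoder features
    flow_gt = torch.randn(B, T, N, 2 * 16 * 16)             # ground-truth optical-flow targets
    diffusion_feats = torch.randn(B, T, N, diff_dim)        # diffusion-model latent features

    motion, flow_pred = projector(encoder_feats)
    loss = flow_prediction_loss(flow_pred, flow_gt) + \
           motion_alignment_loss(diffusion_feats, motion, adapter)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```

In practice the flow-prediction stage would be trained first so the subspace captures genuine dynamics, after which the alignment term is added to the diffusion model's training objective; the sketch above simply combines both losses to show the data flow.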
Similar Papers
ReAlign: Text-to-Motion Generation via Step-Aware Reward-Guided Alignment
CV and Pattern Recognition
Makes computer-made people move like real people.
Improving Video Diffusion Transformer Training by Multi-Feature Fusion and Alignment from Self-Supervised Vision Encoders
CV and Pattern Recognition
Makes AI videos look more real and smooth.
FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation
CV and Pattern Recognition
Makes videos move more smoothly and realistically.