Cross-Frame Representation Alignment for Fine-Tuning Video Diffusion Models
By: Sungwon Hwang, Hyojin Jang, Kinam Kim, and more
Potential Business Impact:
Makes fine-tuned AI videos look realistic and consistent from frame to frame.
Fine-tuning Video Diffusion Models (VDMs) at the user level to generate videos that reflect specific attributes of training data presents notable challenges, yet remains underexplored despite its practical importance. Meanwhile, recent work such as Representation Alignment (REPA) has shown promise in improving the convergence and quality of DiT-based image diffusion models by aligning, or assimilating, their internal hidden states with external pretrained visual features, suggesting its potential for VDM fine-tuning. In this work, we first propose a straightforward adaptation of REPA for VDMs and empirically show that, while effective for convergence, it is suboptimal in preserving semantic consistency across frames. To address this limitation, we introduce Cross-frame Representation Alignment (CREPA), a novel regularization technique that aligns hidden states of a frame with external features from neighboring frames. Empirical evaluations on large-scale VDMs, including CogVideoX-5B and Hunyuan Video, demonstrate that CREPA improves both visual fidelity and cross-frame semantic coherence when fine-tuned with parameter-efficient methods such as LoRA. We further validate CREPA across diverse datasets with varying attributes, confirming its broad applicability.
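To make the mechanism concrete, here is a minimal PyTorch sketch of what a CREPA-style regularizer might look like, based only on the abstract's description and not on the authors' code. All names are illustrative assumptions: `hidden_states` stands for per-frame hidden states from a VDM block, `ext_features` for per-frame features from a frozen pretrained encoder (e.g. DINOv2, a common choice in REPA-style work), `proj` for a small trainable projection, and negative cosine similarity for the alignment objective.

```python
# Hypothetical sketch of a CREPA-style loss; not the paper's implementation.
import torch
import torch.nn.functional as F

def crepa_loss(hidden_states, ext_features, proj, neighbor_offsets=(-1, 1)):
    """
    hidden_states: (B, T, N, D_h) per-frame hidden states from a VDM block.
    ext_features:  (B, T, N, D_e) per-frame features from a frozen encoder.
    proj:          trainable module mapping D_h -> D_e (e.g. a small MLP).
    Aligns each frame's projected hidden states with the external features
    of its neighboring frames, via negative cosine similarity.
    """
    B, T, N, _ = hidden_states.shape
    z = proj(hidden_states)                    # (B, T, N, D_e)
    loss = 0.0
    for dt in neighbor_offsets:
        src = torch.arange(T, device=z.device)
        tgt = (src + dt).clamp(0, T - 1)       # clamp at clip boundaries
        # Similarity between frame t's states and frame (t+dt)'s features.
        sim = F.cosine_similarity(z[:, src], ext_features[:, tgt], dim=-1)
        loss = loss - sim.mean()
    return loss / len(neighbor_offsets)
```

In training, such a term would be added to the usual diffusion objective with a small weight; note that with `neighbor_offsets=(0,)` the sketch reduces to per-frame REPA, aligning each frame only with its own external features.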
Similar Papers
VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models
CV and Pattern Recognition
Makes videos follow real-world physics rules.
U-REPA: Aligning Diffusion U-Nets to ViTs
CV and Pattern Recognition
Helps AI image models learn to make pictures faster.
Align & Invert: Solving Inverse Problems with Diffusion and Flow-based Models via Representational Alignment
CV and Pattern Recognition
Makes blurry pictures clear and sharp.