Reg-DPO: SFT-Regularized Direct Preference Optimization with GT-Pair for Improving Video Generation
By: Jie Du, Xinyu Gong, Qingshan Tan, and more
Potential Business Impact:
Makes AI generate better videos without needing human-labeled preference data.
Recent studies have identified Direct Preference Optimization (DPO) as an efficient, reward-free approach to improving video generation quality. However, existing methods largely follow image-domain paradigms and are mainly developed on small-scale models (around 2B parameters), limiting their ability to address the unique challenges of video tasks: costly data construction, unstable training, and heavy memory consumption. To overcome these limitations, we introduce GT-Pair, which automatically builds high-quality preference pairs by using real (ground-truth) videos as positives and model-generated videos as negatives, eliminating the need for any external annotation. We further present Reg-DPO, which incorporates the SFT loss as a regularization term into the DPO loss to enhance training stability and generation fidelity. Additionally, by combining the FSDP framework with multiple memory optimization techniques, our approach achieves nearly three times the training capacity of FSDP alone. Extensive experiments on both image-to-video (I2V) and text-to-video (T2V) tasks across multiple datasets demonstrate that our method consistently outperforms existing approaches, delivering superior video generation quality.
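To make the core idea concrete, below is a minimal sketch of what an SFT-regularized DPO objective could look like for a diffusion video model: a Diffusion-DPO-style preference term over GT-Pair data (real video as the preferred sample, generated video as the rejected one) plus the standard denoising (SFT) loss on the preferred sample as a regularizer. This is not the authors' released code; the function name, the beta and lambda_sft values, and the use of per-sample denoising errors as inputs are all illustrative assumptions.

# Hedged sketch of a Reg-DPO-style objective (assumed form, not the official implementation).
import torch
import torch.nn.functional as F


def reg_dpo_loss(
    err_win_policy: torch.Tensor,   # per-sample denoising error on preferred (real) videos, policy model
    err_lose_policy: torch.Tensor,  # per-sample denoising error on rejected (generated) videos, policy model
    err_win_ref: torch.Tensor,      # same errors under the frozen reference model
    err_lose_ref: torch.Tensor,
    beta: float = 500.0,            # DPO temperature (assumed value)
    lambda_sft: float = 1.0,        # weight of the SFT regularizer (assumed value)
) -> torch.Tensor:
    """Diffusion-DPO-style preference loss regularized by the SFT (denoising) loss."""
    # Implicit reward margin: how much more the policy improves on the preferred
    # sample than on the rejected one, relative to the reference model.
    margin = (err_win_ref - err_win_policy) - (err_lose_ref - err_lose_policy)
    dpo_term = -F.logsigmoid(beta * margin).mean()
    # SFT regularizer: keep fitting the preferred (ground-truth) videos directly.
    sft_term = err_win_policy.mean()
    return dpo_term + lambda_sft * sft_term


if __name__ == "__main__":
    # Toy usage with random per-sample errors standing in for real denoising losses.
    b = 4
    loss = reg_dpo_loss(torch.rand(b), torch.rand(b), torch.rand(b), torch.rand(b))
    print(loss.item())

In this reading, the SFT term anchors the policy to the ground-truth positives, which is one plausible way the regularizer could stabilize training and preserve fidelity as the abstract describes.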
Similar Papers
RealDPO: Real or Not Real, that is the Preference
CV and Pattern Recognition
Makes computer-made videos move more like real life.
DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models
CV and Pattern Recognition
Makes AI videos move better with less data.