Mind the Generative Details: Direct Localized Detail Preference Optimization for Video Diffusion Models
By: Zitong Huang, Kaidong Zhang, Yukang Ding, and more
Potential Business Impact:
Makes AI videos look more real and flow better.
Aligning text-to-video diffusion models with human preferences is crucial for generating high-quality videos. Existing Direct Preference Optimization (DPO) methods rely on multi-sample ranking and task-specific critic models, which are inefficient and often yield ambiguous global supervision. To address these limitations, we propose LocalDPO, a novel post-training framework that constructs localized preference pairs from real videos and optimizes alignment at the spatio-temporal region level. We design an automated pipeline that generates preference pairs with a single inference per prompt, eliminating the need for external critic models or manual annotation. Specifically, we treat high-quality real videos as positive samples and generate corresponding negatives by locally corrupting them with random spatio-temporal masks and restoring only the masked regions using the frozen base model. During training, we introduce a region-aware DPO loss that restricts preference learning to the corrupted areas for rapid convergence. Experiments on Wan2.1 and CogVideoX demonstrate that LocalDPO consistently improves video fidelity, temporal coherence, and human preference scores over other post-training approaches, establishing a more efficient and fine-grained paradigm for video generator alignment.
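The abstract describes the region-aware DPO loss only at a high level. As a rough illustration, the sketch below shows one way such a loss could restrict the preference signal to the corrupted spatio-temporal region, following the standard Diffusion-DPO formulation (comparing denoising errors of the trained model against a frozen reference on the preferred real video vs. the locally corrupted negative). All function names, argument names, and the `beta` value are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def region_aware_dpo_loss(
    pred_win, pred_lose,          # denoising predictions of the model being trained
    ref_pred_win, ref_pred_lose,  # predictions of the frozen reference (base) model
    target_win, target_lose,      # denoising targets (e.g. the added noise) per sample
    region_mask,                  # 1 inside the corrupted spatio-temporal region, 0 elsewhere
    beta=5000.0,                  # illustrative value only
):
    """Hypothetical sketch of a region-aware Diffusion-DPO objective:
    the preference signal is computed only over the masked (corrupted) region."""

    def masked_err(pred, target):
        # Per-sample squared error, averaged over the masked region only.
        se = (pred - target) ** 2 * region_mask
        return se.flatten(1).sum(dim=1) / region_mask.flatten(1).sum(dim=1).clamp(min=1.0)

    # Implicit reward gap: how much better the trained model denoises each sample
    # than the frozen reference, measured inside the corrupted region.
    diff_win = masked_err(pred_win, target_win) - masked_err(ref_pred_win, target_win)
    diff_lose = masked_err(pred_lose, target_lose) - masked_err(ref_pred_lose, target_lose)

    # Preferred (real) sample should be denoised better than the corrupted negative.
    return -F.logsigmoid(-beta * (diff_win - diff_lose)).mean()
```

Because the masked average ignores untouched regions, gradients concentrate on the locally corrupted details, which is the intuition the abstract gives for the method's faster convergence.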
Similar Papers
DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models
CV and Pattern Recognition
Makes AI videos move better with less data.
Discriminator-Free Direct Preference Optimization for Video Diffusion
CV and Pattern Recognition
Makes videos look better by learning from mistakes.