VistaDPO: Video Hierarchical Spatial-Temporal Direct Preference Optimization for Large Video Models
By: Haojian Huang, Haodong Chen, Shengqiong Wu, and more
Potential Business Impact:
Makes AI understand videos better, like people do.
Large Video Models (LVMs) built upon Large Language Models (LLMs) have shown promise in video understanding but often suffer from misalignment with human intuition and video hallucination issues. To address these challenges, we introduce VistaDPO, a novel framework for Video Hierarchical Spatial-Temporal Direct Preference Optimization. VistaDPO enhances text-video preference alignment across three hierarchical levels: i) Instance Level, aligning overall video content with responses; ii) Temporal Level, aligning video temporal semantics with event descriptions; and iii) Perceptive Level, aligning spatial objects with language tokens. Given the lack of datasets for fine-grained video-language preference alignment, we construct VistaDPO-7k, a dataset of 7.2K QA pairs annotated with chosen and rejected responses, along with spatial-temporal grounding information such as timestamps, keyframes, and bounding boxes. Extensive experiments on video hallucination, Video QA, and captioning benchmarks demonstrate that VistaDPO significantly improves the performance of existing LVMs, effectively mitigating video-language misalignment and hallucination. The code and data are available at https://github.com/HaroldChen19/VistaDPO.
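For readers unfamiliar with the preference-optimization objective the abstract builds on, the sketch below shows a standard Direct Preference Optimization (DPO) loss over chosen/rejected responses. It is a minimal, generic illustration only: the function name, arguments, and beta value are illustrative assumptions, and it does not implement VistaDPO's hierarchical spatial-temporal extension.

    # Minimal sketch of a standard DPO loss over chosen/rejected responses.
    # Illustrative only: VistaDPO extends this idea to instance-, temporal-,
    # and perceptive-level video-language alignment, which is not shown here.
    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps, beta=0.1):
        # Each argument: tensor of summed log-probs per example (batch,).
        chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
        rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
        # Push the policy to prefer the chosen response over the rejected one.
        return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

    # Toy usage with random log-probabilities for a batch of 4 pairs.
    p_c, p_r = torch.randn(4), torch.randn(4)
    r_c, r_r = torch.randn(4), torch.randn(4)
    print(dpo_loss(p_c, p_r, r_c, r_r))

In VistaDPO, analogous preference terms are applied not only to whole responses but also to temporal segments and spatial objects, using the timestamps, keyframes, and bounding boxes annotated in VistaDPO-7k.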
Similar Papers
DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models
CV and Pattern Recognition
Makes AI videos move better with less data.
PaMi-VDPO: Mitigating Video Hallucinations by Prompt-Aware Multi-Instance Video Preference Learning
CV and Pattern Recognition
Teaches AI to describe videos without making things up.
Discriminator-Free Direct Preference Optimization for Video Diffusion
CV and Pattern Recognition
Makes videos look better by learning from mistakes.