Dr.V: A Hierarchical Perception-Temporal-Cognition Framework to Diagnose Video Hallucination by Fine-grained Spatial-Temporal Grounding
By: Meng Luo, Shengqiong Wu, Liqiang Jing, and more
Potential Business Impact:
Catches when AI describes a video incorrectly.
Recent advancements in large video models (LVMs) have significantly enhanced video understanding. However, these models continue to suffer from hallucinations, producing content that conflicts with the input videos. To address this issue, we propose Dr.V, a hierarchical framework covering perceptive, temporal, and cognitive levels to diagnose video hallucination by fine-grained spatial-temporal grounding. Dr.V comprises two key components: a benchmark dataset, Dr.V-Bench, and a satellite video agent, Dr.V-Agent. Dr.V-Bench includes 10k instances drawn from 4,974 videos spanning diverse tasks, each enriched with detailed spatial-temporal annotations. Dr.V-Agent detects hallucinations in LVMs by systematically applying fine-grained spatial-temporal grounding at the perceptive and temporal levels, followed by cognitive-level reasoning. This step-by-step pipeline mirrors human-like video comprehension and effectively identifies hallucinations. Extensive experiments demonstrate that Dr.V-Agent is effective at diagnosing hallucination while enhancing interpretability and reliability, offering a practical blueprint for robust video understanding in real-world scenarios. All our data and code are available at https://github.com/Eurekaleo/Dr.V.
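To make the three-level flow concrete, here is a minimal Python sketch of how such a perceptive-then-temporal-then-cognitive diagnosis could be organized. It is an illustration only: the `Claim`, `Diagnosis`, and `diagnose` names and the stub checker functions are assumptions, not the actual Dr.V-Agent API from the repository.

```python
# Illustrative sketch of a hierarchical hallucination diagnosis pipeline:
# perceptive grounding -> temporal grounding -> cognitive reasoning.
# Names are hypothetical and do not reflect the real Dr.V-Agent implementation.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Claim:
    text: str  # e.g. "a red car turns left"
    kind: str  # "object", "event", or "inference"


@dataclass
class Diagnosis:
    perceptive_issues: List[str] = field(default_factory=list)
    temporal_issues: List[str] = field(default_factory=list)
    cognitive_issues: List[str] = field(default_factory=list)

    @property
    def hallucinated(self) -> bool:
        return bool(self.perceptive_issues or self.temporal_issues or self.cognitive_issues)


def diagnose(
    claims: List[Claim],
    spatially_grounded: Callable[[Claim], bool],
    temporally_grounded: Callable[[Claim], bool],
    consistent_with_evidence: Callable[[Claim, Diagnosis], bool],
) -> Diagnosis:
    """Check an answer's claims level by level, mirroring step-by-step video comprehension."""
    report = Diagnosis()

    # Perceptive level: objects/attributes must map to spatial regions in some frame.
    for c in (c for c in claims if c.kind == "object"):
        if not spatially_grounded(c):
            report.perceptive_issues.append(c.text)

    # Temporal level: events must align with grounded time spans and their ordering.
    for c in (c for c in claims if c.kind == "event"):
        if not temporally_grounded(c):
            report.temporal_issues.append(c.text)

    # Cognitive level: higher-level inferences must be consistent with the grounded evidence.
    for c in (c for c in claims if c.kind == "inference"):
        if not consistent_with_evidence(c, report):
            report.cognitive_issues.append(c.text)

    return report


if __name__ == "__main__":
    # Toy usage with trivial stub checkers standing in for real grounding models.
    claims = [
        Claim("a red car", "object"),
        Claim("the car stops before the pedestrian crosses", "event"),
        Claim("the driver yielded intentionally", "inference"),
    ]
    report = diagnose(
        claims,
        spatially_grounded=lambda c: True,
        temporally_grounded=lambda c: False,  # simulate an unsupported event
        consistent_with_evidence=lambda c, r: not r.temporal_issues,
    )
    print(report.hallucinated, report.temporal_issues)
```

The sketch only conveys the ordering of the checks: lower-level grounding results are accumulated first so that the cognitive step can reason over them, which is the step-by-step structure the abstract describes.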
Similar Papers
Diving into Mitigating Hallucinations from a Vision Perspective for Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
Alternating Perception-Reasoning for Hallucination-Resistant Video Understanding
CV and Pattern Recognition
Teaches computers to watch videos better, without making things up.
DIQ-H: Evaluating Hallucination Persistence in VLMs Under Temporal Visual Degradation
CV and Pattern Recognition
Helps self-driving cars see clearly in bad weather.