Score: 1

DIQ-H: Evaluating Hallucination Persistence in VLMs Under Temporal Visual Degradation

Published: December 3, 2025 | arXiv ID: 2512.03992v1

By: Zexin Lin, Hawen Wan, Yebin Zhong, and more

Potential Business Impact:

Helps gauge whether the vision-language models used in self-driving cars stay reliable when camera input is degraded by bad weather, motion blur, or sensor noise.

Business Areas:
Visual Search, Internet Services

Vision-Language Models (VLMs) deployed in safety-critical applications such as autonomous driving must handle continuous visual streams under imperfect conditions. However, existing benchmarks focus on static, high-quality images and ignore temporal degradation and error propagation, which are critical failure modes where transient visual corruption induces hallucinations that persist across subsequent frames. We introduce DIQ-H, the first benchmark for evaluating VLM robustness under dynamic visual degradation in temporal sequences. DIQ-H applies physics-based corruptions including motion blur, sensor noise, and compression artifacts, and measures hallucination persistence, error recovery, and temporal consistency through multi-turn question-answering tasks. To enable scalable annotation, we propose Uncertainty-Guided Iterative Refinement (UIR), which generates reliable pseudo-ground-truth using lightweight VLMs with uncertainty filtering, achieving a 15.3 percent accuracy improvement. Experiments on 16 state-of-the-art VLMs reveal substantial robustness gaps: even advanced models such as GPT-4o achieve only a 78.5 percent recovery rate, while open-source models struggle with temporal consistency at less than 60 percent. DIQ-H provides a comprehensive platform for evaluating VLM reliability in real-world deployments.
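The abstract describes two core mechanisms: applying physics-based corruptions to frames in a temporal sequence, and filtering lightweight-VLM pseudo-ground-truth by uncertainty (UIR). The sketch below illustrates both ideas at a very coarse level, assuming a simple motion-blur-plus-noise corruption model and a fixed confidence cutoff; the function names, threshold value, and corruption parameters are illustrative and are not taken from the paper.

```python
# Minimal sketch of the two ideas in the abstract; not the paper's actual pipeline.
import numpy as np


def corrupt_frame(frame: np.ndarray, severity: float, rng: np.random.Generator) -> np.ndarray:
    """Apply a crude physics-inspired corruption: horizontal motion blur plus sensor noise."""
    kernel_size = max(1, int(severity * 9))
    kernel = np.ones(kernel_size) / kernel_size
    # Blur each row to mimic horizontal motion blur.
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, frame.astype(np.float64)
    )
    # Add Gaussian sensor noise scaled by severity.
    noisy = blurred + rng.normal(0.0, 255.0 * 0.05 * severity, size=frame.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)


def filter_pseudo_labels(candidates):
    """Keep only pseudo-ground-truth answers whose confidence clears a threshold.

    `candidates` is a list of (answer, confidence) pairs produced by a lightweight VLM;
    the 0.7 cutoff is an assumed value, not one reported in the paper.
    """
    THRESHOLD = 0.7
    return [ans for ans, conf in candidates if conf >= THRESHOLD]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(4)]
    # Degrade only the middle frames, mimicking a transient corruption event
    # whose downstream effect on answers could then be tracked over later frames.
    degraded = [
        corrupt_frame(f, severity=0.8, rng=rng) if 1 <= i <= 2 else f
        for i, f in enumerate(frames)
    ]
    labels = filter_pseudo_labels([("a red car", 0.91), ("a pedestrian", 0.42)])
    print(len(degraded), labels)
```

In an actual evaluation run, each degraded frame would be paired with multi-turn questions so that answers given after the corruption clears can be compared against the uncertainty-filtered pseudo-ground-truth, which is how persistence and recovery could be scored.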

Country of Origin
🇭🇰 Hong Kong

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition