Improving VQA Reliability: A Dual-Assessment Approach with Self-Reflection and Cross-Model Verification
By: Xixian Wu, Yang Ou, Pengchao Tian, and more
Potential Business Impact:
Makes AI answers more truthful and trustworthy.
Vision-language models (VLMs) have demonstrated significant potential in Visual Question Answering (VQA). However, the susceptibility of VLMs to hallucinations can lead to overconfident yet incorrect answers, severely undermining answer reliability. To address this, we propose Dual-Assessment for VLM Reliability (DAVR), a novel framework that integrates Self-Reflection and Cross-Model Verification for comprehensive uncertainty estimation. The DAVR framework features a dual-pathway architecture: one pathway leverages dual selector modules to assess response reliability by fusing VLM latent features with QA embeddings, while the other deploys external reference models for factual cross-checking to mitigate hallucinations. Evaluated in the Reliable VQA Challenge at ICCV-CLVL 2025, DAVR achieves a leading $\Phi_{100}$ score of 39.64 and a 100-AUC of 97.22, securing first place and demonstrating its effectiveness in enhancing the trustworthiness of VLM responses.
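To make the dual-pathway idea concrete, here is a minimal Python sketch of how a selector-based self-reflection score could be combined with a cross-model agreement check. The abstract does not specify implementation details, so the class and function names (`SelectorHead`, `cross_model_agreement`, `dual_assessment`), feature dimensions, concatenation fusion, exact-match agreement, weighted combination, and abstention threshold are all illustrative assumptions rather than the authors' method.

```python
# Hypothetical sketch of the dual-assessment idea: a selector head scores answer
# reliability from fused VLM latent features and QA embeddings (self-reflection),
# and agreement with external reference models provides cross-model verification.
# All names, dimensions, and combination rules below are illustrative assumptions.

import torch
import torch.nn as nn


class SelectorHead(nn.Module):
    """Scores answer reliability from fused VLM latent features and QA embeddings."""

    def __init__(self, vlm_dim: int = 1024, qa_dim: int = 768, hidden: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(vlm_dim + qa_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, vlm_feat: torch.Tensor, qa_emb: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([vlm_feat, qa_emb], dim=-1)   # simple concatenation fusion
        return torch.sigmoid(self.mlp(fused)).squeeze(-1)  # reliability score in [0, 1]


def cross_model_agreement(answer: str, reference_answers: list[str]) -> float:
    """Fraction of external reference models whose answers match the VLM's answer
    (a stand-in for the factual cross-checking pathway)."""
    if not reference_answers:
        return 0.0
    matches = sum(a.strip().lower() == answer.strip().lower() for a in reference_answers)
    return matches / len(reference_answers)


def dual_assessment(vlm_feat, qa_emb, answer, reference_answers,
                    selector: SelectorHead, weight: float = 0.5,
                    abstain_threshold: float = 0.5):
    """Combine the self-reflection score with cross-model verification and abstain
    when the combined reliability falls below a threshold."""
    self_score = selector(vlm_feat, qa_emb).item()
    agree_score = cross_model_agreement(answer, reference_answers)
    reliability = weight * self_score + (1.0 - weight) * agree_score
    final_answer = answer if reliability >= abstain_threshold else "[abstain]"
    return final_answer, reliability
```

Under this sketch, abstaining on low-reliability answers is what a risk-coverage metric such as $\Phi_{100}$ rewards: incorrect answers incur a large penalty, so withholding uncertain responses improves the score.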
Similar Papers
Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
VLMs Guided Interpretable Decision Making for Autonomous Driving
CV and Pattern Recognition
Helps self-driving cars make safer, clearer choices.
DIQ-H: Evaluating Hallucination Persistence in VLMs Under Temporal Visual Degradation
CV and Pattern Recognition
Helps self-driving cars see clearly in bad weather.