Improving VQA Reliability: A Dual-Assessment Approach with Self-Reflection and Cross-Model Verification

Published: December 16, 2025 | arXiv ID: 2512.14770v1

By: Xixian Wu, Yang Ou, Pengchao Tian, and more

Potential Business Impact:

Makes AI answers about images more trustworthy by flagging responses that are likely to be incorrect.

Business Areas:
Image Recognition, Data and Analytics, Software

Vision-language models (VLMs) have demonstrated significant potential in Visual Question Answering (VQA). However, their susceptibility to hallucinations can lead to overconfident yet incorrect answers, severely undermining answer reliability. To address this, we propose Dual-Assessment for VLM Reliability (DAVR), a novel framework that integrates Self-Reflection and Cross-Model Verification for comprehensive uncertainty estimation. DAVR features a dual-pathway architecture: one pathway leverages dual selector modules to assess response reliability by fusing VLM latent features with QA embeddings, while the other deploys external reference models for factual cross-checking to mitigate hallucinations. Evaluated in the Reliable VQA Challenge at ICCV-CLVL 2025, DAVR achieves a leading $\Phi_{100}$ score of 39.64 and a 100-AUC of 97.22, securing first place and demonstrating its effectiveness in enhancing the trustworthiness of VLM responses.
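To make the dual-pathway idea concrete, here is a minimal sketch based only on the abstract: a selector module that fuses VLM latent features with a QA embedding to score reliability (the Self-Reflection pathway), plus a simple agreement check against external reference models (the Cross-Model Verification pathway). All module names, dimensions, the exact-match comparison, and the weighted fusion rule are assumptions for illustration; the paper's actual selector architecture and verification protocol may differ.

```python
# Illustrative DAVR-style dual assessment. Assumptions throughout: the real
# selector design, feature dimensions, and score-fusion rule are not specified
# in the abstract, so everything below is a hypothetical stand-in.
import torch
import torch.nn as nn


class SelectorModule(nn.Module):
    """Self-Reflection pathway (assumed form): score answer reliability by
    fusing VLM latent features with a question-answer (QA) embedding."""

    def __init__(self, latent_dim: int, qa_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + qa_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, vlm_latent: torch.Tensor, qa_embedding: torch.Tensor) -> torch.Tensor:
        # Fuse the two feature vectors and map to a reliability score in [0, 1].
        fused = torch.cat([vlm_latent, qa_embedding], dim=-1)
        return torch.sigmoid(self.mlp(fused)).squeeze(-1)


def cross_model_agreement(answer: str, reference_answers: list[str]) -> float:
    """Cross-Model Verification pathway (assumed form): fraction of external
    reference models whose answer matches the VLM's answer. Exact string match
    is used here; the paper may use a softer semantic comparison."""
    if not reference_answers:
        return 0.0
    matches = sum(a.strip().lower() == answer.strip().lower() for a in reference_answers)
    return matches / len(reference_answers)


def dual_assessment_score(
    selector_a: SelectorModule,
    selector_b: SelectorModule,
    vlm_latent: torch.Tensor,
    qa_embedding: torch.Tensor,
    answer: str,
    reference_answers: list[str],
    weight: float = 0.5,
) -> float:
    """Combine both pathways into one reliability score. The abstract mentions
    dual selector modules; averaging them and linearly mixing with the
    cross-model agreement is our assumption, not the paper's exact rule."""
    with torch.no_grad():
        self_reflection = 0.5 * (
            selector_a(vlm_latent, qa_embedding) + selector_b(vlm_latent, qa_embedding)
        )
    verification = cross_model_agreement(answer, reference_answers)
    return weight * self_reflection.item() + (1.0 - weight) * verification


if __name__ == "__main__":
    torch.manual_seed(0)
    sel_a, sel_b = SelectorModule(768, 512), SelectorModule(768, 512)
    latent, qa = torch.randn(768), torch.randn(512)
    score = dual_assessment_score(
        sel_a, sel_b, latent, qa,
        answer="a red bus",
        reference_answers=["a red bus", "red bus", "a red bus"],
    )
    print(f"reliability score: {score:.3f}")  # abstain if below a chosen threshold
```

In a selective-prediction setting like the Reliable VQA Challenge, a score such as this would gate whether the system answers or abstains, trading coverage against the risk of a confidently wrong answer.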

Page Count
4 pages

Category
Computer Science: Computer Vision and Pattern Recognition