Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
By: Kassoum Sanogo, Renzo Ardiccioni
Potential Business Impact:
Fixes AI mistakes when describing pictures.
Vision-language models (VLMs) frequently generate hallucinated content: plausible but incorrect claims about image content. We propose a training-free self-correction framework that enables VLMs to iteratively refine their responses through uncertainty-guided visual re-attention. Our method combines multidimensional uncertainty quantification (token entropy, attention dispersion, semantic consistency, claim confidence) with attention-guided cropping of under-explored image regions. Operating entirely with frozen, pretrained VLMs, the framework requires no gradient updates. We validate our approach on the POPE and MMHal-Bench benchmarks using the Qwen2.5-VL-7B [23] architecture. Experimental results show that our method reduces the hallucination rate by 9.8 percentage points relative to the baseline while improving object-existence accuracy by 4.7 points on adversarial splits. Qualitative analysis further confirms that uncertainty-guided re-attention grounds corrections in visual evidence where standard decoding fails. Validation is currently limited to Qwen2.5-VL-7B [23]; we plan to extend it across diverse architectures in future versions. We release our code and methodology to facilitate future research in trustworthy multimodal systems.
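To make the described loop concrete, the following is a minimal sketch of an uncertainty-guided self-correction cycle under stated assumptions: the `vlm_generate` callable, the equal weighting of token entropy and attention dispersion, the entropy threshold, and the lowest-attention crop heuristic are all illustrative choices, not the authors' released implementation, which also scores semantic consistency and claim confidence.

```python
import numpy as np


def token_entropy(token_probs: np.ndarray) -> float:
    """Mean Shannon entropy over per-token probability distributions."""
    eps = 1e-12
    ent = -(token_probs * np.log(token_probs + eps)).sum(axis=-1)
    return float(ent.mean())


def attention_dispersion(attn_map: np.ndarray) -> float:
    """Entropy of the normalized visual attention map; high values mean
    attention is spread thinly rather than focused on evidence regions."""
    p = attn_map.flatten()
    p = p / (p.sum() + 1e-12)
    return float(-(p * np.log(p + 1e-12)).sum())


def least_attended_crop(image, attn_map: np.ndarray, image_size):
    """Crop the grid cell with the lowest attention mass so the model
    re-attends to an under-explored region (PIL-style crop assumed)."""
    h_cells, w_cells = attn_map.shape
    i, j = np.unravel_index(np.argmin(attn_map), attn_map.shape)
    W, H = image_size
    box = (j * W // w_cells, i * H // h_cells,
           (j + 1) * W // w_cells, (i + 1) * H // h_cells)
    return image.crop(box)


def self_correct(vlm_generate, image, question, image_size,
                 max_rounds: int = 3, uncertainty_threshold: float = 2.0):
    """Training-free loop: answer, score uncertainty, crop an under-explored
    region, and ask the frozen VLM to revise its own answer."""
    answer, token_probs, attn_map = vlm_generate(image, question)
    for _ in range(max_rounds):
        uncertainty = 0.5 * token_entropy(token_probs) + \
                      0.5 * attention_dispersion(attn_map)
        if uncertainty < uncertainty_threshold:
            break  # confident enough; stop refining
        crop = least_attended_crop(image, attn_map, image_size)
        revision_prompt = (
            f"Question: {question}\nPrevious answer: {answer}\n"
            "Re-examine the cropped region and correct any claim "
            "not supported by the image."
        )
        answer, token_probs, attn_map = vlm_generate(crop, revision_prompt)
    return answer
```

Here `vlm_generate` stands in for a frozen VLM that returns the generated text, per-token probabilities, and a visual attention map; no gradient updates are involved at any step, matching the training-free setting of the paper.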
Similar Papers
Conscious Gaze: Adaptive Attention Mechanisms for Hallucination Mitigation in Vision-Language Models
CV and Pattern Recognition
Makes AI see better, not just guess words.
Mitigating Image Captioning Hallucinations in Vision-Language Models
Multimedia
Fixes AI mistakes when it sees and talks.
Causally-Grounded Dual-Path Attention Intervention for Object Hallucination Mitigation in LVLMs
CV and Pattern Recognition
Fixes AI's fake image descriptions.