HEDGE: Hallucination Estimation via Dense Geometric Entropy for VQA with Vision-Language Models
By: Sushant Gautam, Michael A. Riegler, Pål Halvorsen
Potential Business Impact:
Detects when AI "sees" wrong things in pictures.
Vision-language models (VLMs) enable open-ended visual question answering but remain prone to hallucinations. We present HEDGE, a unified framework for hallucination detection that combines controlled visual perturbations, semantic clustering, and robust uncertainty metrics. HEDGE integrates sampling, distortion synthesis, clustering (entailment- and embedding-based), and metric computation into a reproducible pipeline applicable across multimodal architectures. Evaluations on VQA-RAD and KvasirVQA-x1 with three representative VLMs (LLaVA-Med, Med-Gemma, Qwen2.5-VL) reveal clear architecture- and prompt-dependent trends. Hallucination detectability is highest for unified-fusion models with dense visual tokenization (Qwen2.5-VL) and lowest for architectures with restricted tokenization (Med-Gemma). Embedding-based clustering often yields stronger separation when applied directly to the generated answers, whereas NLI-based clustering remains advantageous for LLaVA-Med and for longer, sentence-level responses. Across configurations, the VASE metric consistently provides the most robust hallucination signal, especially when paired with embedding clustering and a moderate sampling budget (n ≈ 10–15). Prompt design also matters: concise, label-style outputs offer clearer semantic structure than syntactically constrained one-sentence responses. By framing hallucination detection as a geometric robustness problem shaped jointly by sampling scale, prompt structure, model architecture, and clustering strategy, HEDGE provides a principled, compute-aware foundation for evaluating multimodal reliability. The hedge-bench PyPI library enables reproducible and extensible benchmarking, with full code and experimental resources available at https://github.com/Simula/HEDGE.
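To make the pipeline concrete, below is a minimal Python sketch of the embedding-clustering step the abstract describes: sample several answers from a VLM for one (image, question) pair, embed them, cluster semantically equivalent answers, and score uncertainty as entropy over cluster occupancy. It assumes sentence-transformers and scikit-learn as stand-in tools; it is not the hedge-bench API, the model name and distance threshold are illustrative choices, and the cluster-entropy score is a simplified proxy for the paper's uncertainty metrics (the VASE metric itself is not reproduced here).

    import numpy as np
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import AgglomerativeClustering

    def cluster_entropy(answers, distance_threshold=0.3):
        """Entropy over semantic clusters of sampled VLM answers.

        answers: answers sampled for one (image, question) pair, ideally
        under controlled visual perturbations. The threshold is a
        hypothetical setting, not a value from the paper.
        """
        # Embed answers; normalized embeddings keep cosine distances well-behaved.
        embedder = SentenceTransformer("all-MiniLM-L6-v2")
        emb = embedder.encode(answers, normalize_embeddings=True)
        # Group semantically equivalent answers (embedding-based clustering).
        labels = AgglomerativeClustering(
            n_clusters=None,
            metric="cosine",
            linkage="average",
            distance_threshold=distance_threshold,
        ).fit_predict(emb)
        # Entropy of the cluster-occupancy distribution: higher entropy means
        # the sampled answers disagree semantically -> higher hallucination risk.
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log(p)).sum())

    # Example: answers sampled for one medical VQA item (illustrative strings).
    answers = ["no pneumothorax", "pneumothorax absent", "left-sided pneumothorax"]
    print(cluster_entropy(answers))

With a sampling budget in the range the abstract reports as effective (n ≈ 10–15), low entropy indicates the model answers consistently across perturbed views, while high entropy flags semantic instability and thus likely hallucination.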
Similar Papers
Vision-Amplified Semantic Entropy for Hallucination Detection in Medical Visual Question Answering
CV and Pattern Recognition
Finds fake answers in medical AI.
Geometric Uncertainty for Detecting and Correcting Hallucinations in LLMs
Computation and Language
Helps computers know when they are wrong.
Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.