Conscious Gaze: Adaptive Attention Mechanisms for Hallucination Mitigation in Vision-Language Models
By: Weijue Bu, Guan Yuan, Guixian Zhang
Large Vision-Language Models (VLMs) often exhibit text inertia: attention drifts away from visual evidence toward linguistic priors, producing object hallucinations. Existing decoding strategies intervene only at the output logits and thus cannot correct internal reasoning drift, while recent internal-control methods based on heuristic head suppression or global steering vectors lack principled grounding. We introduce Conscious Gaze (CG-VLM), a training-free, inference-time framework that converts game-theoretic interpretability into actionable decoding control. A Cognitive Demand Sensor built on Harsanyi interactions estimates instantaneous vision-text synergy and identifies the moments when visual grounding is necessary. Conditioned on this signal, a Focused Consensus Induction module selectively reorients mid-layer attention toward visual tokens before it collapses into text priors. CG-VLM achieves state-of-the-art results on POPE and CHAIR across InstructBLIP, LLaVA, Qwen-VL, and mPLUG, while preserving general capabilities, demonstrating that token-level sensing enables precise, context-aware intervention without compromising foundational knowledge.
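To make the two modules concrete, the sketch below shows one plausible inference-time realization. It is a minimal sketch, assuming a hypothetical `value_fn(mask)` that returns the model's next-token log-probability when only the unmasked input tokens are visible; the pairwise sampling scheme, the boost factor, the threshold, and all function names are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the CG-VLM pipeline described in the abstract.
# Assumptions (not from the paper): `value_fn(mask)` scores the model with
# only unmasked tokens visible; names, thresholds, and boost are illustrative.
import math
import torch

def pairwise_harsanyi(value_fn, i, j, n_tokens):
    """Pairwise Harsanyi interaction: I({i,j}) = v({i,j}) - v({i}) - v({j}) + v({})."""
    def mask_for(active):
        m = torch.zeros(n_tokens, dtype=torch.bool)
        for k in active:
            m[k] = True
        return m
    return (value_fn(mask_for((i, j))) - value_fn(mask_for((i,)))
            - value_fn(mask_for((j,))) + value_fn(mask_for(())))

def synergy_score(value_fn, vision_idx, text_idx, n_tokens, n_pairs=16, seed=0):
    """Cognitive Demand Sensor (sketch): mean vision-text Harsanyi interaction
    over sampled (vision token, text token) pairs; a low score is read as the
    model disengaging from visual evidence at the current decoding step."""
    g = torch.Generator().manual_seed(seed)
    total = 0.0
    for _ in range(n_pairs):
        i = vision_idx[torch.randint(len(vision_idx), (1,), generator=g).item()]
        j = text_idx[torch.randint(len(text_idx), (1,), generator=g).item()]
        total += pairwise_harsanyi(value_fn, i, j, n_tokens)
    return total / n_pairs

def reorient_attention(attn_logits, vision_idx, boost=1.5):
    """Focused Consensus Induction (sketch): additively shift pre-softmax
    attention logits on visual-token columns, i.e. a multiplicative boost of
    those columns after softmax, applied in selected mid layers."""
    logits = attn_logits.clone()
    logits[..., vision_idx] += math.log(boost)
    return logits

if __name__ == "__main__":
    # Toy demo: a purely additive game has zero Harsanyi synergy, so the
    # sensor should report ~0 and trigger the intervention.
    torch.manual_seed(0)
    weights = torch.randn(10)
    def value_fn(mask):
        return float((weights * mask).sum())
    vis, txt = [0, 1, 2, 3], [4, 5, 6, 7, 8, 9]
    s = synergy_score(value_fn, vis, txt, n_tokens=10, n_pairs=8)
    print(f"synergy = {s:.3f}")                    # ~0.000 for this toy game
    attn = torch.randn(1, 8, 1, 10)                # (batch, heads, query, key)
    if s < 0.1:                                    # hypothetical demand threshold
        attn = reorient_attention(attn, vis)
```

In a full decoding loop the score would be recomputed at each generation step, with `reorient_attention` applied via a forward hook on the chosen mid-layer attention modules only when the score falls below the threshold; the additive shift in logit space keeps the intervention selective, matching the abstract's claim of context-aware control rather than a global steering vector.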
Similar Papers
Causally-Grounded Dual-Path Attention Intervention for Object Hallucination Mitigation in LVLMs
CV and Pattern Recognition
Fixes AI's fake image descriptions.
Eye Gaze Tells You Where to Compute: Gaze-Driven Efficient VLMs
CV and Pattern Recognition
Makes smart glasses understand things faster.
GazeVLM: A Vision-Language Model for Multi-Task Gaze Understanding
CV and Pattern Recognition
Helps computers understand where people are looking.