Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats
By: Jiaye Qian, Ge Zheng, Yuchen Zhu, and more
Potential Business Impact:
Fixes AI that makes up answers when it sees pictures.
Despite their impressive performance across a wide range of tasks, Large Vision-Language Models (LVLMs) remain prone to hallucination. In this study, we propose a comprehensive intervention framework aligned with the transformer's causal architecture in LVLMs, integrating the effects of different intervention paths on hallucination. We find that hallucinations in LVLMs do not arise from a single causal path, but rather from the interplay among image-to-input-text, image-to-output-text, and text-to-text pathways. For the first time, we also find that LVLMs rely on different pathways depending on the question-answer alignment format. Building on these insights, we propose simple yet effective methods to identify and intervene on critical hallucination heads within each pathway, tailored to discriminative and generative formats. Experiments across multiple benchmarks demonstrate that our approach consistently reduces hallucinations across diverse alignment types.
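The abstract does not include pseudocode, but the core mechanism it describes, identifying hallucination-prone attention heads on each pathway and intervening on them, can be sketched as a simple per-head scaling step. The snippet below is a minimal, hypothetical PyTorch sketch: the CRITICAL_HEADS indices, the alpha damping factor, and the assumption that per-head outputs are concatenated along the hidden dimension are illustrative placeholders, not values or procedures from the paper (which additionally tailors the intervention to discriminative vs. generative alignment formats).

```python
import torch

# Hypothetical (layer, head) choices per causal pathway; placeholder values,
# not indices reported in the paper.
CRITICAL_HEADS = {
    "image_to_output_text": {12: [3, 7]},   # layer 12, heads 3 and 7
    "text_to_text": {20: [1]},              # layer 20, head 1
}

def scale_selected_heads(attn_output: torch.Tensor,
                         num_heads: int,
                         heads_to_scale: list[int],
                         alpha: float = 0.5) -> torch.Tensor:
    """Down-weight selected heads in a multi-head attention output.

    Assumes attn_output has shape (batch, seq_len, num_heads * head_dim),
    i.e. the per-head outputs concatenated before the output projection.
    """
    bsz, seq_len, hidden = attn_output.shape
    head_dim = hidden // num_heads
    out = attn_output.view(bsz, seq_len, num_heads, head_dim).clone()
    for h in heads_to_scale:
        out[:, :, h, :] *= alpha            # soften, rather than zero out, the head
    return out.view(bsz, seq_len, hidden)

if __name__ == "__main__":
    # Toy demonstration on random activations (no real LVLM involved):
    # dampen the heads picked for the image-to-output-text pathway at layer 12.
    x = torch.randn(1, 16, 768)             # batch=1, 16 tokens, 12 heads of dim 64
    heads = CRITICAL_HEADS["image_to_output_text"][12]
    y = scale_selected_heads(x, num_heads=12, heads_to_scale=heads, alpha=0.5)
    print(y.shape)                          # torch.Size([1, 16, 768])
```

In a real system this kind of scaling would typically be applied during decoding via forward hooks on the relevant attention layers; how the critical heads are actually identified per pathway is the paper's contribution and is not reproduced here.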
Similar Papers
Diving into Mitigating Hallucinations from a Vision Perspective for Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
Modality Bias in LVLMs: Analyzing and Mitigating Object Hallucination via Attention Lens
CV and Pattern Recognition
Fixes AI's tendency to make up objects.
Treble Counterfactual VLMs: A Causal Approach to Hallucination
CV and Pattern Recognition
Makes AI understand pictures and words better.