Score: 1

Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats

Published: November 21, 2025 | arXiv ID: 2511.17254v1

By: Jiaye Qian, Ge Zheng, Yuchen Zhu and more

Potential Business Impact:

Reduces how often AI makes up answers when it looks at pictures.

Business Areas:
Computer Vision Hardware, Software

Despite their impressive performance across a wide range of tasks, Large Vision-Language Models (LVLMs) remain prone to hallucination. In this study, we propose a comprehensive intervention framework aligned with the transformer's causal architecture in LVLMs, integrating the effects of different intervention paths on hallucination. We find that hallucinations in LVLMs do not arise from a single causal path, but rather from the interplay among image-to-input-text, image-to-output-text, and text-to-text pathways. For the first time, we also find that LVLMs rely on different pathways depending on the question-answer alignment format. Building on these insights, we propose simple yet effective methods to identify and intervene on critical hallucination heads within each pathway, tailored to discriminative and generative formats. Experiments across multiple benchmarks demonstrate that our approach consistently reduces hallucinations across diverse alignment types.
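
The abstract describes identifying and intervening on critical "hallucination heads" along three pathways (image-to-input-text, image-to-output-text, text-to-text). As a rough illustration only, and not the authors' released code, the sketch below shows one plausible form such a head-level intervention could take: down-weighting attention from one token span to another on selected heads and renormalizing. The head indices, the token-span layout, and the scaling factor alpha are all assumptions made for this example.

import numpy as np

def intervene_on_heads(attn, head_ids, src_span, dst_span, alpha=0.3):
    # attn: (num_heads, seq_len, seq_len) post-softmax attention for one layer.
    # head_ids: heads flagged as critical for this pathway (hypothetical indices).
    # src_span / dst_span: slices of source and destination token positions.
    # alpha: multiplicative down-weighting applied to the selected pathway.
    attn = attn.copy()
    for h in head_ids:
        attn[h, dst_span, src_span] *= alpha            # suppress the pathway
        attn[h] /= attn[h].sum(axis=-1, keepdims=True)  # renormalize rows to sum to 1
    return attn

# Toy example: 4 heads over 10 tokens; assume positions 0-5 are image tokens
# and 6-9 are generated output-text tokens (layout is illustrative only).
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 10, 10))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)

image_tokens = slice(0, 6)
output_text_tokens = slice(6, 10)

# Suppress the image-to-output-text pathway on two hypothetical critical heads.
attn_edited = intervene_on_heads(attn, head_ids=[1, 3],
                                 src_span=image_tokens,
                                 dst_span=output_text_tokens)

In the paper's framing, different pathways (and hence different heads and spans) would be targeted depending on whether the question-answer alignment format is discriminative or generative; the snippet above only illustrates the mechanical shape of one such intervention.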

Repos / Data Links

Page Count
24 pages

Category
Computer Science:
Computer Vision and Pattern Recognition