Score: 1

See Different, Think Better: Visual Variations Mitigating Hallucinations in LVLMs

Published: July 29, 2025 | arXiv ID: 2507.22003v1

By: Ziyun Dai, Xiaoqiang Li, Shaohua Zhang, and more

Potential Business Impact:

Helps AI understand images better and avoid fabricated answers.

Business Areas:
Visual Search Internet Services

Large Vision-Language Models (LVLMs) have demonstrated remarkable capabilities in visual understanding and multimodal reasoning. However, LVLMs frequently exhibit hallucination, generating textual responses that are inconsistent with the provided visual content. Existing hallucination mitigation methods are predominantly text-centric, and the challenges of visual-semantic alignment significantly limit their effectiveness, especially in fine-grained visual understanding scenarios. To this end, this paper presents ViHallu, a Vision-Centric Hallucination mitigation framework that enhances visual-semantic alignment through Visual Variation Image Generation and Visual Instruction Construction. ViHallu introduces visual variation images with controllable visual alterations while maintaining the overall image structure. These images, combined with carefully constructed visual instructions, enable LVLMs to better understand fine-grained visual content through fine-tuning, allowing models to more precisely capture the correspondence between visual content and text and thereby enhancing visual-semantic alignment. Extensive experiments on multiple benchmarks show that ViHallu effectively enhances models' fine-grained visual understanding while significantly reducing hallucination tendencies. Furthermore, we release ViHallu-Instruction, a visual instruction dataset specifically designed for hallucination mitigation and visual-semantic alignment. Code is available at https://github.com/oliviadzy/ViHallu.
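To make the approach concrete, below is a minimal sketch of what one visual-instruction training record pairing an original image with a visual variation image might look like. The field names, the edit taxonomy, the prompt template, and the file paths are all assumptions for illustration; the actual ViHallu-Instruction schema is defined by the paper and its repository.

```python
# Hypothetical structure of a fine-tuning sample contrasting an original image
# with a controllably altered "visual variation image", so the model must attend
# to fine-grained visual content rather than language priors.
from dataclasses import dataclass, asdict
import json


@dataclass
class VisualVariation:
    """One controlled alteration applied to the original image."""
    edit_type: str        # e.g. "object_removal", "color_change", "count_change" (assumed taxonomy)
    target: str           # object or attribute that was altered
    variation_image: str  # path to the edited image (overall structure preserved)


@dataclass
class ViHalluSample:
    """A training sample pairing the original and varied images with an instruction."""
    original_image: str
    variation: VisualVariation
    instruction: str        # question probing the altered detail
    answer_original: str    # grounded answer w.r.t. the original image
    answer_variation: str   # grounded answer w.r.t. the variation image


def build_sample(original_image: str, variation: VisualVariation) -> ViHalluSample:
    # Hypothetical prompt template: ask directly about the altered detail.
    question = f"Is there a {variation.target} in this image? Answer yes or no."
    return ViHalluSample(
        original_image=original_image,
        variation=variation,
        instruction=question,
        answer_original="Yes.",
        answer_variation="No." if variation.edit_type == "object_removal" else "Yes.",
    )


if __name__ == "__main__":
    var = VisualVariation(
        edit_type="object_removal",
        target="red umbrella",
        variation_image="images/0001_no_umbrella.png",
    )
    sample = build_sample("images/0001.png", var)
    print(json.dumps(asdict(sample), indent=2))
```

Fine-tuning on such paired records is one plausible way the framework could force the model to ground its answers in the actual pixels, since the same question has different correct answers depending on which image variant is shown.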

Country of Origin
🇨🇳 China

Repos / Data Links
https://github.com/oliviadzy/ViHallu

Page Count
16 pages

Category
Computer Science:
Computer Vision and Pattern Recognition