SAVER: Mitigating Hallucinations in Large Vision-Language Models via Style-Aware Visual Early Revision

Published: August 5, 2025 | arXiv ID: 2508.03177v1

By: Zhaoxu Li, Chenqi Kong, Yi Yu, and more

Potential Business Impact:

Reduces the mistakes (hallucinations) AI models make when describing stylized images such as game art, illustrations, and medical drawings.

Large Vision-Language Models (LVLMs) have recently achieved significant breakthroughs in understanding complex visual-textual contexts. However, hallucination issues still limit their real-world applicability. Although previous mitigation methods effectively reduce hallucinations in photographic images, they largely overlook the potential risks posed by stylized images, which play crucial roles in critical scenarios such as game scene understanding, art education, and medical analysis. In this work, we first construct a dataset comprising photographic images and their corresponding stylized versions with carefully annotated caption labels. We then conduct head-to-head comparisons on both discriminative and generative tasks by benchmarking 13 advanced LVLMs on the collected datasets. Our findings reveal that stylized images tend to induce significantly more hallucinations than their photographic counterparts. To address this issue, we propose Style-Aware Visual Early Revision (SAVER), a novel mechanism that dynamically adjusts LVLMs' final outputs based on token-level visual attention patterns, leveraging early-layer feedback to mitigate hallucinations caused by stylized images. Extensive experiments demonstrate that SAVER achieves state-of-the-art performance in hallucination mitigation across various models, datasets, and tasks.
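The abstract does not spell out how early-layer feedback revises the output, so the following is only a minimal toy sketch of the general idea: compare a token's visual-attention mass in an early layer against the final layer, and penalize candidates whose visual grounding has faded. All function names, signatures, and the penalty rule here are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a SAVER-style early revision step.
# Assumption: tokens whose visual attention drops sharply between an
# early layer and the final layer are more likely to be hallucinated,
# especially on stylized inputs, so their logits are down-weighted.

def saver_adjust(final_logits, early_attn, final_attn, alpha=1.0):
    """Revise token logits using early-layer visual attention as feedback.

    final_logits : dict mapping token -> logit from the final layer
    early_attn   : dict mapping token -> visual-attention mass, early layer
    final_attn   : dict mapping token -> visual-attention mass, final layer
    alpha        : assumed hyperparameter controlling revision strength
    """
    revised = {}
    for tok, logit in final_logits.items():
        # How much visual grounding this token lost across layers.
        drop = early_attn.get(tok, 0.0) - final_attn.get(tok, 0.0)
        # Only penalize fading grounding; never reward it.
        revised[tok] = logit - alpha * max(drop, 0.0)
    return revised

# Toy example: "cat" was visually grounded early but lost attention by
# the final layer, so its logit is reduced relative to "dog".
logits = {"cat": 2.0, "dog": 1.8}
early = {"cat": 0.6, "dog": 0.1}
final = {"cat": 0.2, "dog": 0.1}
print(saver_adjust(logits, early, final, alpha=2.0))
```

In this toy run, "cat" loses 0.4 attention mass and its logit drops from 2.0 to about 1.2, while "dog" is untouched; the real mechanism operates on token-level attention maps inside the model rather than on such hand-built dictionaries.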

Country of Origin
πŸ‡ΈπŸ‡¬ πŸ‡¨πŸ‡³ πŸ‡­πŸ‡° Singapore, China, Hong Kong

Page Count
24 pages

Category
Computer Science:
CV and Pattern Recognition