SAVE: Sparse Autoencoder-Driven Visual Information Enhancement for Mitigating Object Hallucination
By: Sangha Park, Seungryong Yoo, Jisoo Mok, and more
Potential Business Impact:
Reduces how often AI makes up fake objects in pictures.
Although Multimodal Large Language Models (MLLMs) have advanced substantially, they remain vulnerable to object hallucination caused by language priors and visual information loss. To address this, we propose SAVE (Sparse Autoencoder-Driven Visual Information Enhancement), a framework that mitigates hallucination by steering the model along Sparse Autoencoder (SAE) latent features. A binary object-presence question-answering probe identifies the SAE features most indicative of the model's visual information processing, referred to as visual understanding features. Steering the model along these identified features reinforces grounded visual understanding and effectively reduces hallucination. With its simple design, SAVE outperforms state-of-the-art training-free methods on standard benchmarks, achieving a 10 percentage-point improvement in CHAIR_S and consistent gains on POPE and MMHal-Bench. Extensive evaluations across multiple models and layers confirm the robustness and generalizability of our approach. Further analysis reveals that steering along visual understanding features suppresses the generation of uncertain object tokens and increases attention to image tokens, mitigating hallucination. Code is released at https://github.com/wiarae/SAVE.
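The steering idea described in the abstract can be sketched in a few lines: precompute which SAE latent features the object-presence probe flags as "visual understanding" features, then add their SAE decoder directions to the hidden states of one decoder layer during generation. The sketch below is not the released SAVE implementation; the layer index, feature indices, steering strength, and model attribute names are illustrative assumptions, written against a generic PyTorch MLLM.

# Minimal sketch (not the authors' code) of steering along selected SAE features,
# assuming an SAE already trained on the residual stream of one decoder layer.
import torch

def make_steering_hook(sae_decoder_weight, feature_ids, strength=4.0):
    # sae_decoder_weight: (num_features, hidden_dim) decoder matrix of the SAE.
    # feature_ids: indices of the probe-selected visual understanding features
    #              (assumed to be precomputed offline).
    # Sum the chosen decoder directions into one steering vector, then normalize.
    direction = sae_decoder_weight[feature_ids].sum(dim=0)
    direction = direction / direction.norm()

    def hook(module, inputs, output):
        # Decoder layers often return tuples; the hidden states come first.
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + strength * direction.to(hidden.device, hidden.dtype)
        if isinstance(output, tuple):
            return (steered,) + output[1:]
        return steered

    return hook

# Usage sketch: attach the hook to one decoder layer of an MLLM loaded with
# Hugging Face transformers, generate as usual, then remove the hook.
# (Layer index 20 and the attribute path are placeholders.)
# handle = model.language_model.model.layers[20].register_forward_hook(
#     make_steering_hook(sae.W_dec, visual_feature_ids))
# outputs = model.generate(**inputs, max_new_tokens=128)
# handle.remove()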
Similar Papers
SAFE: A Sparse Autoencoder-Based Framework for Robust Query Enrichment and Hallucination Mitigation in LLMs
Computation and Language
Stops AI from making up wrong information.
Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models
CV and Pattern Recognition
Helps AI understand pictures better, controlling its answers.
SAVER: Mitigating Hallucinations in Large Vision-Language Models via Style-Aware Visual Early Revision
CV and Pattern Recognition
Fixes AI mistakes in pictures and words.