Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings
By: Aakriti Agrawal, Gouthaman KV, Rohith Aralikatti, and more
Potential Business Impact:
Helps AI pay more attention to pictures so it makes fewer things up when describing them.
In this work, we identify an inherent bias in prevailing LVLM architectures toward the language modality, largely resulting from the common practice of simply appending visual embeddings to the input text sequence. To address this, we propose a simple yet effective method that refines textual embeddings by integrating average-pooled visual features. Our approach demonstrably improves visual grounding and significantly reduces hallucinations on established benchmarks. While average pooling offers a straightforward, robust, and efficient means of incorporating visual information, we believe that more sophisticated fusion methods could further enhance visual grounding and cross-modal alignment. Given that the primary focus of this work is to highlight the modality imbalance and its impact on hallucinations -- and to show that refining textual embeddings with visual information mitigates this issue -- we leave exploration of advanced fusion strategies for future work.
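To make the proposed refinement concrete, below is a minimal sketch of the core idea, assuming a LLaVA-style pipeline where patch features from a vision encoder accompany the text prompt. The module name `VisualTextFusion`, the linear projection, and the additive injection into the token embeddings are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class VisualTextFusion(nn.Module):
    """Refine text token embeddings with an average-pooled visual summary.

    Hypothetical sketch: patch features from the vision encoder are mean-pooled
    into a single vector, projected to the text embedding dimension, and added
    to every text token embedding before the sequence is passed to the LLM.
    """

    def __init__(self, vis_dim: int, txt_dim: int):
        super().__init__()
        # Project the pooled visual vector into the text embedding space.
        self.proj = nn.Linear(vis_dim, txt_dim)

    def forward(self, vis_feats: torch.Tensor, txt_embeds: torch.Tensor) -> torch.Tensor:
        # vis_feats:  (batch, num_patches, vis_dim) patch features from the vision encoder
        # txt_embeds: (batch, seq_len, txt_dim) token embeddings of the text prompt
        pooled = vis_feats.mean(dim=1)              # (batch, vis_dim) average pooling over patches
        visual_summary = self.proj(pooled)          # (batch, txt_dim)
        # Inject the visual summary into every text token embedding.
        return txt_embeds + visual_summary.unsqueeze(1)


if __name__ == "__main__":
    # Toy shapes: 576 ViT patches at dim 1024, 32 text tokens at dim 4096.
    fusion = VisualTextFusion(vis_dim=1024, txt_dim=4096)
    vis = torch.randn(2, 576, 1024)
    txt = torch.randn(2, 32, 4096)
    refined = fusion(vis, txt)
    print(refined.shape)  # torch.Size([2, 32, 4096])
```

The refined text embeddings would then be concatenated with the visual tokens as usual; the only change is that the text side now carries a global visual signal, which is the modality rebalancing the abstract describes.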
Similar Papers
Diving into Mitigating Hallucinations from a Vision Perspective for Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
Modality Bias in LVLMs: Analyzing and Mitigating Object Hallucination via Attention Lens
CV and Pattern Recognition
Fixes AI's tendency to make up objects.
Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection
CV and Pattern Recognition
Stops AI from making things up about images.