Mitigating Multimodal Hallucinations via Gradient-based Self-Reflection
By: Shan Wang, Maying Shen, Nadine Chang, and more
Potential Business Impact:
Stops AI from describing things that aren't actually in the image.
Hallucinations in multimodal large language models are caused by text-visual bias and co-occurrence bias. The former reflects an over-reliance on text information in the decision-making process, while the latter arises from statistical object-pairing patterns learned from the training data. Existing mitigation methods address these biases heuristically, without accounting for how the bias level fluctuates across instances. We first propose estimating the influence of each token type (visual, prompt, and previous outputs) using a gradient-based self-reflection method. The estimated token influence further enables the detection of object-related visual tokens and their integration into an influence-aware contrastive decoding framework that mitigates both types of bias simultaneously. Our method operates without additional resources such as costly fine-tuning, extra models, or data statistics. Extensive experiments show it effectively reduces hallucinations, achieving up to a 92% accuracy increase on LLaVA-QA90.
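To make the described pipeline concrete, below is a minimal sketch of the two ingredients the abstract mentions: gradient-based influence estimation per token type, and contrastive decoding against a copy of the input in which the influential visual tokens have been perturbed. This is not the authors' released code; the model interface, the index arguments, the gradient-magnitude proxy, the masking direction, and the `alpha` contrast weight are all illustrative assumptions.

```python
# Sketch only: assumes a HuggingFace-style vision-language model that accepts
# `inputs_embeds` and returns `.logits`, plus index lists marking which input
# positions are visual tokens, prompt tokens, and previously generated tokens.
import torch
import torch.nn.functional as F

def token_type_influence(model, inputs_embeds, target_logit_fn,
                         visual_idx, prompt_idx, output_idx):
    """Estimate the influence of each token type on the next-token decision
    via the gradient of a candidate-token score w.r.t. the input embeddings
    (a simple gradient-based self-reflection proxy, assumed here)."""
    inputs_embeds = inputs_embeds.detach().requires_grad_(True)
    logits = model(inputs_embeds=inputs_embeds).logits        # (1, seq_len, vocab)
    score = target_logit_fn(logits[:, -1, :])                 # scalar score of the candidate token
    (grads,) = torch.autograd.grad(score, inputs_embeds)      # (1, seq_len, dim)
    per_token = grads.norm(dim=-1).squeeze(0)                 # gradient magnitude per input token
    return {
        "visual": per_token[visual_idx].sum().item(),
        "prompt": per_token[prompt_idx].sum().item(),
        "output": per_token[output_idx].sum().item(),
    }

def influence_aware_contrastive_decoding(logits_full, logits_perturbed, alpha=1.0):
    """Standard contrastive-decoding combination: amplify the full-input
    distribution and subtract the distribution obtained after perturbing the
    influential object-related visual tokens; `alpha` is an assumed knob."""
    contrasted = (1 + alpha) * logits_full - alpha * logits_perturbed
    return F.log_softmax(contrasted, dim=-1)
```

In this reading, tokens whose gradient influence is dominated by text rather than visual evidence flag instances where the two biases are likely to fire, and the contrastive step down-weights continuations that survive even when the detected visual evidence is removed.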
Similar Papers
Modality Bias in LVLMs: Analyzing and Mitigating Object Hallucination via Attention Lens
CV and Pattern Recognition
Fixes AI's tendency to make up objects.
Towards Mitigating Hallucinations in Large Vision-Language Models by Refining Textual Embeddings
CV and Pattern Recognition
Makes AI understand pictures and words better.
Mitigating Hallucinations in Multimodal LLMs via Object-aware Preference Optimization
CV and Pattern Recognition
Makes AI stop making up fake answers.