SDCD: Structure-Disrupted Contrastive Decoding for Mitigating Hallucinations in Large Vision-Language Models
By: Yuxuan Xia, Siheng Wang, Peng Li
Potential Business Impact:
Stops AI from making up fake objects.
Large Vision-Language Models (LVLMs) demonstrate significant progress in multimodal understanding and reasoning, yet object hallucination remains a critical challenge. Existing research focuses on mitigating language priors or high-level statistical biases but often overlooks the internal complexities of the visual encoding process. We identify that visual statistical bias, arising from the inherent Bag-of-Patches behavior of Vision Encoders under weak structural supervision, acts as a contributing factor to object hallucinations. Under this bias, models prioritize local texture features within individual patches over holistic geometric structures, which can induce spurious visual confidence and result in hallucinations. To address this, we introduce a training-free algorithm called Structure-Disrupted Contrastive Decoding (SDCD), which performs contrastive calibration of the output distribution by introducing a shuffled, structure-disrupted view of the input image. By penalizing tokens that remain highly confident under this structure-less view, SDCD effectively suppresses the texture-driven bias. Experimental results demonstrate that SDCD significantly mitigates hallucinations across multiple benchmarks and enhances the overall multimodal capabilities of LVLMs.
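The contrastive calibration described in the abstract follows the general visual-contrastive-decoding pattern: decode once on the original image, once on a patch-shuffled copy, and subtract the structure-less confidence from the structured one. The sketch below illustrates that pattern under stated assumptions; the patch-shuffling routine, the LLaVA-style model call, and the single hyperparameter `alpha` are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of the SDCD idea (assumptions: a VCD-style contrastive
# formulation with one hyperparameter `alpha`; the model call signature
# is illustrative of a LLaVA-style LVLM, not the paper's code).
import torch


def shuffle_patches(image: torch.Tensor, patch_size: int = 14) -> torch.Tensor:
    """Disrupt global structure by randomly permuting non-overlapping patches.

    image: (C, H, W) tensor; H and W are assumed divisible by patch_size.
    """
    c, h, w = image.shape
    ph, pw = h // patch_size, w // patch_size
    # Split into patches: (ph*pw, C, patch_size, patch_size)
    patches = (image
               .unfold(1, patch_size, patch_size)
               .unfold(2, patch_size, patch_size)
               .permute(1, 2, 0, 3, 4)
               .reshape(ph * pw, c, patch_size, patch_size))
    patches = patches[torch.randperm(ph * pw)]  # shuffle spatial order
    # Reassemble the shuffled patches into an image of the same size
    patches = patches.reshape(ph, pw, c, patch_size, patch_size)
    return patches.permute(2, 0, 3, 1, 4).reshape(c, h, w)


@torch.no_grad()
def sdcd_step(model, input_ids, image, alpha: float = 1.0) -> torch.Tensor:
    """One decoding step with structure-disrupted contrastive calibration.

    Tokens that stay confident even after the image's geometric structure is
    destroyed are treated as texture-driven, so their logits are penalized.
    """
    logits_orig = model(input_ids, images=image).logits[:, -1]       # structured view
    logits_disr = model(input_ids, images=shuffle_patches(image)).logits[:, -1]
    # Contrastive calibration: amplify structure-dependent evidence,
    # subtract confidence that survives without structure.
    return (1 + alpha) * logits_orig - alpha * logits_disr
```

In this sketch the disrupted view reuses the same prompt and model, so the only extra cost per step is one additional forward pass; the weighting of the two views via `alpha` is an assumed knob for trading off hallucination suppression against fluency.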
Similar Papers
Med-VCD: Mitigating Hallucination for Medical Large Vision Language Models through Visual Contrastive Decoding
CV and Pattern Recognition
Makes AI doctors give more accurate answers.
Watch Closely: Mitigating Object Hallucinations in Large Vision-Language Models with Disentangled Decoding
CV and Pattern Recognition
Makes AI see and describe things correctly.
Decoupling Contrastive Decoding: Robust Hallucination Mitigation in Multimodal Large Language Models
Machine Learning (CS)
Stops AI from making up fake answers.