Look Closer! An Adversarial Parametric Editing Framework for Hallucination Mitigation in VLMs
By: Jiayu Hu, Beibei Li, Jiangwei Xia, and more
Potential Business Impact:
Makes AI describe what it actually sees in images.
While Vision-Language Models (VLMs) have garnered increasing attention in the AI community for their promising practical applications, they exhibit persistent hallucination issues, generating outputs misaligned with their visual inputs. Recent studies attribute these hallucinations to VLMs' over-reliance on linguistic priors and insufficient integration of visual features, and propose heuristic decoding-calibration strategies to mitigate them. However, the non-trainable nature of these strategies inherently limits their optimization potential. To this end, we propose ALEAHallu, an adversarial parametric editing framework for hallucination mitigation in VLMs that follows an Activate-Locate-Edit Adversarially paradigm. Specifically, we first construct an activation dataset comprising grounded responses (positive samples attentively anchored in visual features) and hallucinatory responses (negative samples reflecting LLM prior bias and internal-knowledge artifacts). Next, we identify critical hallucination-prone parameter clusters by analyzing the differential hidden states of response pairs. These clusters are then fine-tuned on prompts injected with adversarially tuned prefixes optimized to maximize visual neglect, forcing the model to prioritize visual evidence over its inherent parametric biases. Evaluations on both generative and discriminative VLM tasks demonstrate that ALEAHallu significantly alleviates hallucinations. Our code is available at https://github.com/hujiayu1223/ALEAHallu.
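To make the Activate-Locate-Edit-Adversarially pipeline concrete, here is a minimal PyTorch sketch of the Locate and Edit stages. It assumes a HuggingFace-style VLM that exposes hidden states and attentions; the layer-gap saliency score, the attention-mass proxy for "visual neglect", and every function and variable name here are illustrative assumptions, not the paper's exact objectives (see the repository above for the authors' implementation).

```python
import torch

def locate_clusters(model, pos_batch, neg_batch, top_k=4):
    """Locate: rank transformer layers by the hidden-state gap between
    grounded (positive) and hallucinatory (negative) responses, and return
    the most divergent layers as hallucination-prone parameter clusters.
    Assumes positive/negative pairs are padded to the same length."""
    with torch.no_grad():
        pos = model(**pos_batch, output_hidden_states=True).hidden_states
        neg = model(**neg_batch, output_hidden_states=True).hidden_states
    gaps = torch.tensor([(p - n).abs().mean().item()
                         for p, n in zip(pos, neg)])
    return gaps.topk(top_k).indices.tolist()

def adversarial_edit_step(model, prefix, batch, vision_key_positions,
                          opt_prefix, opt_model):
    """Edit adversarially, one min-max round: the soft prefix is tuned to
    maximize visual neglect (low attention mass on image tokens); the
    located layers are then fine-tuned on grounded targets under that
    worst-case prefix. `vision_key_positions` holds the (precomputed)
    indices of image tokens in the prefixed sequence."""
    embeds = torch.cat([prefix, batch["inputs_embeds"]], dim=1)

    # Inner step: push the prefix to minimize attention to vision tokens.
    out = model(inputs_embeds=embeds, output_attentions=True)
    vision_attn = torch.stack(out.attentions)[..., vision_key_positions].mean()
    opt_prefix.zero_grad()
    vision_attn.backward()
    opt_prefix.step()

    # Outer step: with the adversarial prefix frozen, fit grounded labels
    # so the edited layers learn to rely on visual evidence anyway.
    # Prefix positions are masked out of the loss with the -100 label.
    embeds = torch.cat([prefix.detach(), batch["inputs_embeds"]], dim=1)
    ignore = torch.full(prefix.shape[:2], -100,
                        dtype=torch.long, device=prefix.device)
    labels = torch.cat([ignore, batch["labels"]], dim=1)
    loss = model(inputs_embeds=embeds, labels=labels).loss
    opt_model.zero_grad()
    loss.backward()
    opt_model.step()
    return loss.item()
```

In this reading, `opt_model` would be built only over the parameters of the layers returned by locate_clusters, with the rest of the network frozen, so the edit stays confined to the hallucination-prone clusters.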
Similar Papers
Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
See Different, Think Better: Visual Variations Mitigating Hallucinations in LVLMs
CV and Pattern Recognition
Helps AI see pictures better, stops fake answers.
Intervene-All-Paths: Unified Mitigation of LVLM Hallucinations across Alignment Formats
CV and Pattern Recognition
Fixes AI that makes up answers when it sees pictures.