Instruction-Aligned Visual Attention for Mitigating Hallucinations in Large Vision-Language Models
By: Bin Li, Dehong Gao, Yeyuan Wang, and more
Potential Business Impact:
Makes AI describe pictures without making things up.
Despite the significant success of Large Vision-Language Models (LVLMs), these models still suffer from hallucinations when describing images, generating answers that mention non-existent objects. These models have been reported to over-focus on certain irrelevant image tokens that contain no critical information for answering the question, which distorts the output. To address this, we propose an Instruction-Aligned Visual Attention (IAVA) approach, which identifies irrelevant tokens by comparing changes in attention weights under two different instructions. By applying contrastive decoding, we dynamically adjust the logits generated from the original image tokens and the irrelevant image tokens, reducing the model's over-attention to irrelevant information. Experimental results demonstrate that IAVA consistently outperforms existing decoding techniques on benchmarks such as MME, POPE, and TextVQA in mitigating object hallucinations. Our IAVA approach is available online at https://github.com/Lee-lab558/IAVA.
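The abstract describes two steps: flagging irrelevant image tokens by how little their attention weights change between two instructions, and contrasting the logits produced with and without those tokens. The sketch below illustrates one plausible reading of that pipeline, assuming a standard contrastive-decoding formula; the `alpha` strength, the `top_k` cutoff, and the `flag_irrelevant_tokens` selection rule are illustrative stand-ins, not the authors' exact implementation (see their repository for the real code).

```python
import torch

def iava_contrastive_logits(logits_full: torch.Tensor,
                            logits_irrelevant: torch.Tensor,
                            alpha: float = 1.0) -> torch.Tensor:
    """Contrast next-token logits from the full image against logits driven
    only by tokens flagged as irrelevant, down-weighting the latter.

    logits_full:       (vocab,) logits when the model attends to all image tokens.
    logits_irrelevant: (vocab,) logits when only the flagged tokens are kept.
    alpha:             contrast strength (illustrative hyperparameter).
    """
    return (1.0 + alpha) * logits_full - alpha * logits_irrelevant


def flag_irrelevant_tokens(attn_instruction: torch.Tensor,
                           attn_neutral: torch.Tensor,
                           top_k: int = 16) -> torch.Tensor:
    """Flag image tokens whose attention changes least between the actual
    instruction and a content-free instruction (illustrative criterion)."""
    delta = (attn_instruction - attn_neutral).abs()
    # Smallest change between the two instructions -> treated as instruction-irrelevant.
    return torch.topk(-delta, k=top_k).indices


if __name__ == "__main__":
    vocab_size, num_image_tokens = 32000, 576
    # Toy attention weights over image tokens under the two instructions.
    attn_instruction = torch.rand(num_image_tokens)
    attn_neutral = torch.rand(num_image_tokens)
    irrelevant_idx = flag_irrelevant_tokens(attn_instruction, attn_neutral)

    # Toy logits; in practice these come from two forward passes of the LVLM.
    logits_full = torch.randn(vocab_size)
    logits_irrelevant = torch.randn(vocab_size)
    adjusted = iava_contrastive_logits(logits_full, logits_irrelevant, alpha=1.0)
    print(irrelevant_idx.shape, torch.argmax(adjusted).item())
```

In this reading, the contrastive term penalizes continuations that the irrelevant tokens alone would favor, which is the mechanism the abstract credits with reducing over-attention to uninformative regions.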
Similar Papers
Attention Hijackers: Detect and Disentangle Attention Hijacking in LVLMs for Hallucination Mitigation
CV and Pattern Recognition
Stops AI from making up fake details about pictures.
A Comprehensive Analysis for Visual Object Hallucination in Large Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when it looks at and talks about pictures.
VEGAS: Mitigating Hallucinations in Large Vision-Language Models via Vision-Encoder Attention Guided Adaptive Steering
CV and Pattern Recognition
Makes AI see pictures better, with fewer mistakes.