Score: 1

Watch Closely: Mitigating Object Hallucinations in Large Vision-Language Models with Disentangled Decoding

Published: December 22, 2025 | arXiv ID: 2512.19070v1

By: Ruiqi Ma, Yu Yan, Chunhong Zhang, and more

Potential Business Impact:

Helps AI describe images accurately by reducing made-up (hallucinated) objects in its output.

Business Areas:
Image Recognition Data and Analytics, Software

Large Vision-Language Models (LVLMs) bridge the gap between visual and linguistic modalities, demonstrating strong potential across a variety of domains. However, despite significant progress, LVLMs still suffer from severe hallucination issues in object recognition tasks. These models often fail to accurately identify certain objects, producing text that appears fluent but does not correspond to the visual content, which can have serious consequences in real-world applications. Recently, several methods have been proposed to alleviate LVLM hallucinations, but most focus solely on reducing hallucinations in the language modality. To mitigate hallucinations in both the language and visual modalities, we introduce Hallucination Disentangled Decoding (HDD), a training-free method. HDD enhances the original image by segmenting it and selecting the segmented views that best augment it, while also using a blank image to eliminate language-prior hallucinations from both the original and segmented images. This design not only reduces the model's dependence on language priors but also enhances its visual performance. (Code: https://github.com/rickeyhhh/Hallucination-Disentangled-Decoding)
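To make the decoding idea concrete, here is a minimal sketch of how logits from the different views might be combined at each decoding step, assuming a contrastive-style rule: visual evidence from the original image and the selected segments is pooled, and logits obtained with a blank image approximate the pure language prior and are subtracted. The weights alpha and beta, the helper name, and the exact combination rule are illustrative assumptions, not the authors' precise formulation (see the paper and repository for that).

```python
# Sketch only: assumed combination rule for disentangled decoding.
import torch


def disentangled_decode_logits(
    logits_original: torch.Tensor,        # (vocab,) logits with the full image
    logits_segments: list[torch.Tensor],  # (vocab,) logits per selected segment
    logits_blank: torch.Tensor,           # (vocab,) logits with a blank image
    alpha: float = 1.0,                   # weight on segment evidence (assumed)
    beta: float = 1.0,                    # strength of prior subtraction (assumed)
) -> torch.Tensor:
    """Pool visual evidence, then contrast against the blank-image logits."""
    visual = logits_original
    if logits_segments:
        # Average the segment logits and blend them with the original view.
        visual = (visual + alpha * torch.stack(logits_segments).mean(dim=0)) / (1.0 + alpha)
    # Subtract the blank-image logits to damp tokens favored only by the
    # language prior rather than by the visual content.
    return (1.0 + beta) * visual - beta * logits_blank


# Usage with dummy logits (vocabulary of 32 tokens); in practice each tensor
# would come from one forward pass of the LVLM per view at each decoding step.
vocab = 32
orig = torch.randn(vocab)
segs = [torch.randn(vocab) for _ in range(3)]
blank = torch.randn(vocab)
next_token = disentangled_decode_logits(orig, segs, blank).argmax().item()
print(next_token)
```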

Repos / Data Links
https://github.com/rickeyhhh/Hallucination-Disentangled-Decoding

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition