Text-Guided Layer Fusion Mitigates Hallucination in Multimodal LLMs
By: Chenchen Lin, Sanbao Su, Rachel Luo and more
Potential Business Impact:
Makes AI understand pictures better by using more image details.
Multimodal large language models (MLLMs) typically rely on a single late-layer feature from a frozen vision encoder, leaving the encoder's rich hierarchy of visual cues under-utilized. MLLMs still suffer from visually ungrounded hallucinations, often relying on language priors rather than image evidence. While many prior mitigation strategies operate on the text side, they leave the visual representation unchanged and do not exploit the rich hierarchy of features encoded across vision layers. Existing multi-layer fusion methods partially address this limitation but remain static, applying the same layer mixture regardless of the query. In this work, we introduce TGIF (Text-Guided Inter-layer Fusion), a lightweight module that treats encoder layers as depth-wise "experts" and predicts a prompt-dependent fusion of visual features. TGIF follows the principle of direct external fusion, requires no vision-encoder updates, and adds minimal overhead. Integrated into LLaVA-1.5-7B, TGIF provides consistent improvements across hallucination, OCR, and VQA benchmarks, while preserving or improving performance on ScienceQA, GQA, and MMBench. These results suggest that query-conditioned, hierarchy-aware fusion is an effective way to strengthen visual grounding and reduce hallucination in modern MLLMs.
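For readers who want a concrete picture of what a query-conditioned, hierarchy-aware fusion module might look like, the sketch below shows one plausible PyTorch implementation: per-layer gating weights are predicted from a pooled text-prompt embedding and used to mix hidden states collected from every layer of the frozen vision encoder. The class name TextGuidedLayerFusion, the pooled-embedding interface, and the two-layer gate MLP are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of text-guided inter-layer fusion (assumed design, not the authors' code).
import torch
import torch.nn as nn

class TextGuidedLayerFusion(nn.Module):
    def __init__(self, num_layers: int, txt_dim: int):
        super().__init__()
        # Predict one mixing logit per vision-encoder layer ("depth-wise expert")
        # from a pooled representation of the user prompt.
        self.gate = nn.Sequential(
            nn.Linear(txt_dim, txt_dim // 2),
            nn.GELU(),
            nn.Linear(txt_dim // 2, num_layers),
        )

    def forward(self, layer_feats: torch.Tensor, txt_emb: torch.Tensor) -> torch.Tensor:
        # layer_feats: (batch, num_layers, num_patches, vis_dim)
        #   hidden states gathered from every layer of the frozen vision encoder.
        # txt_emb: (batch, txt_dim) pooled embedding of the text prompt.
        weights = self.gate(txt_emb).softmax(dim=-1)   # (batch, num_layers), prompt-dependent
        weights = weights[:, :, None, None]            # broadcast over patches and channels
        fused = (weights * layer_feats).sum(dim=1)     # (batch, num_patches, vis_dim)
        return fused  # single fused visual feature, passed on to the LLM projector
```

In an LLaVA-1.5-style pipeline, a module like this would presumably replace the single late-layer feature normally fed to the vision-language projector, leaving the vision encoder itself frozen and adding only a small gating network.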
Similar Papers
Multi-Grained Text-Guided Image Fusion for Multi-Exposure and Multi-Focus Scenarios
CV and Pattern Recognition
Makes blurry pictures sharp and clear.
When Semantics Mislead Vision: Mitigating Large Multimodal Models Hallucinations in Scene Text Spotting and Understanding
CV and Pattern Recognition
Fixes AI's mistakes reading blurry text.
A Survey of Multimodal Hallucination Evaluation and Detection
CV and Pattern Recognition
Fixes AI that makes up fake things.