Visual Multi-Agent System: Mitigating Hallucination Snowballing via Visual Flow
By: Xinlei Yu, Chengming Xu, Guibin Zhang, and more
Potential Business Impact:
Fixes AI mistakes when talking about pictures.
Multi-Agent Systems (MAS) powered by Visual Language Models (VLMs) can tackle challenging tasks but suffer from a novel failure mode, multi-agent visual hallucination snowballing, where a hallucination is seeded in a single agent and amplified by subsequent ones due to over-reliance on the textual flow to relay visual information. Through turn-, layer-, and token-wise attention analyses, we provide detailed insights into the essence of hallucination snowballing as a reduction in visual attention allocation. This leads us to identify a subset of vision tokens with a unimodal attention peak in middle layers that best preserve visual evidence but gradually diminish in deeper agent turns, resulting in visual hallucination snowballing in MAS. Thus, we propose ViF, a lightweight, plug-and-play mitigation paradigm that relays inter-agent messages with Visual Flow powered by the selected visual relay tokens and applies attention reallocation to amplify this pattern. Experimental results demonstrate that our method markedly reduces hallucination snowballing, consistently improving performance across eight benchmarks based on four common MAS structures and ten base models. The source code will be available at: https://github.com/YU-deep/ViF.git.
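The token-selection criterion described above can be illustrated with a minimal sketch. This is not the authors' ViF implementation; the function names, the `mid_band` fraction, and the input layout (a per-layer attention mass for each vision token) are all illustrative assumptions for demonstration.

```python
import numpy as np

def is_unimodal_mid_peak(profile, low, high):
    """Check that a per-layer attention profile rises to a single peak
    located in the middle-layer band [low, high) and falls afterward.
    (Illustrative criterion, not the paper's exact test.)"""
    peak = int(np.argmax(profile))
    if not (low <= peak < high):
        return False  # peak outside the middle-layer band
    rising = np.all(np.diff(profile[: peak + 1]) >= 0)   # non-decreasing up to peak
    falling = np.all(np.diff(profile[peak:]) <= 0)       # non-increasing after peak
    return bool(rising and falling)

def select_relay_tokens(attn, mid_band=(0.3, 0.7)):
    """attn: array of shape (num_layers, num_vision_tokens) holding the
    attention mass each vision token receives at each layer.
    Returns indices of tokens whose profile peaks unimodally in mid layers."""
    num_layers = attn.shape[0]
    low = int(mid_band[0] * num_layers)
    high = int(mid_band[1] * num_layers)
    return [t for t in range(attn.shape[1])
            if is_unimodal_mid_peak(attn[:, t], low, high)]

# Toy example: 10 layers, 3 vision tokens.
profiles = np.stack([
    [0, 1, 2, 3, 4, 5, 4, 3, 2, 1],  # token 0: unimodal mid-layer peak -> selected
    [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],  # token 1: peak at final layer -> rejected
    [0, 5, 4, 3, 2, 1, 0, 0, 0, 0],  # token 2: peak at early layer -> rejected
], axis=1)
relay_tokens = select_relay_tokens(profiles)
```

Only token 0 matches the criterion here; in ViF, tokens like it would be relayed directly between agents instead of being summarized into text.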
Similar Papers
Not All Tokens and Heads Are Equally Important: Dual-Level Attention Intervention for Hallucination Mitigation
CV and Pattern Recognition
Fixes AI's mistakes when it describes pictures.
Toward More Reliable Artificial Intelligence: Reducing Hallucinations in Vision-Language Models
CV and Pattern Recognition
Fixes AI mistakes when describing pictures.
Enhancing Agentic Autonomous Scientific Discovery with Vision-Language Model Capabilities
Computation and Language
Computers discover science by checking their own work.