AVAM: Universal Training-free Adaptive Visual Anchoring Embedded into Multimodal Large Language Model for Multi-image Question Answering
By: Kang Zeng, Guojin Zhong, Jintao Cheng, and more
Potential Business Impact:
Helps AI answer questions about many pictures at once, more accurately and efficiently.
The advancement of Multimodal Large Language Models (MLLMs) has driven significant progress in Visual Question Answering (VQA), which has evolved from single-image VQA to multi-image VQA (MVQA). However, the increased number of images in MVQA inevitably introduces substantial visual redundancy that is irrelevant to answering the question, hurting both accuracy and efficiency. Existing methods that compress visual tokens to address this issue lack flexibility in controlling the number of retained tokens and tend to produce discrete visual fragments, which hinder MLLMs' ability to comprehend images holistically. In this paper, we propose a straightforward yet universal Adaptive Visual Anchoring strategy that can be seamlessly integrated into existing MLLMs and delivers significant accuracy improvements through adaptive compression. To balance the predictions derived from the global and the compressed visual inputs, we further introduce a novel collaborative decoding mechanism that enables optimal performance. Extensive experiments validate the effectiveness of our method, demonstrating consistent performance improvements across various MLLMs. The code will be made publicly available.
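To make the two ideas in the abstract concrete, here is a minimal PyTorch sketch of (1) an adaptive anchoring step that keeps, per image, however many visual tokens are needed to cover a fixed fraction of question-relevance (so the compressed token count varies per image rather than being a fixed budget), and (2) a collaborative decoding step that fuses next-token distributions from the global and compressed inputs. The function names, the coverage threshold `mass`, and the fusion weight `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F


def adaptive_visual_anchoring(visual_tokens: torch.Tensor,
                              question_emb: torch.Tensor,
                              mass: float = 0.9) -> torch.Tensor:
    """Keep the smallest set of visual tokens whose question-relevance
    covers `mass` of the total relevance weight for this image.

    visual_tokens: (num_tokens, dim) features of one image
    question_emb:  (dim,) pooled embedding of the question
    """
    # Relevance of each visual token to the question (softmax-normalized).
    scores = F.softmax(
        visual_tokens @ question_emb / visual_tokens.shape[-1] ** 0.5, dim=0)
    # Rank tokens by relevance and keep the top ones until `mass` is covered.
    order = torch.argsort(scores, descending=True)
    cum = torch.cumsum(scores[order], dim=0)
    k = int(torch.searchsorted(cum, torch.tensor(mass)).item()) + 1
    kept = order[:k].sort().values  # restore original spatial order
    return visual_tokens[kept]


def collaborative_decode(logits_global: torch.Tensor,
                         logits_anchored: torch.Tensor,
                         alpha: float = 0.5) -> torch.Tensor:
    """Fuse next-token distributions from the full (global) visual input and
    the compressed (anchored) input before choosing the next token."""
    probs = (1 - alpha) * F.softmax(logits_global, dim=-1) \
        + alpha * F.softmax(logits_anchored, dim=-1)
    return probs.argmax(dim=-1)


if __name__ == "__main__":
    torch.manual_seed(0)
    images = [torch.randn(576, 1024) for _ in range(4)]  # 4 images, 576 tokens each
    question = torch.randn(1024)
    anchored = [adaptive_visual_anchoring(v, question) for v in images]
    print([a.shape[0] for a in anchored])  # kept token counts differ per image
    next_token = collaborative_decode(torch.randn(32000), torch.randn(32000))
    print(next_token.item())
```

In this sketch the anchoring is training-free: it only reorders and thresholds existing token features, so it could in principle be dropped in front of any MLLM's visual projector without retraining, which mirrors the plug-and-play claim of the abstract.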
Similar Papers
Marten: Visual Question Answering with Mask Generation for Multi-modal Document Understanding
CV and Pattern Recognition
Helps computers understand pictures and words together.
Toward Ambulatory Vision: Learning Visually-Grounded Active View Selection
CV and Pattern Recognition
Helps robots see better by moving around.
Unexplored flaws in multiple-choice VQA evaluations
CV and Pattern Recognition
Shows that AI answers can change just by rewording the question.