Vision-Centric Activation and Coordination for Multimodal Large Language Models
By: Yunnan Wang, Fan Lu, Kecheng Zheng, and more
Potential Business Impact:
Helps computers understand pictures better.
Multimodal large language models (MLLMs) integrate image features from visual encoders with LLMs, demonstrating advanced comprehension capabilities. However, mainstream MLLMs are supervised solely by next-token prediction on textual tokens, neglecting critical vision-centric information essential for analytical abilities. To tackle this dilemma, we introduce VaCo, which optimizes MLLM representations through Vision-Centric activation and Coordination from multiple vision foundation models (VFMs). VaCo introduces visual discriminative alignment to integrate task-aware perceptual features extracted from VFMs, thereby unifying the optimization of both textual and visual outputs in MLLMs. Specifically, we incorporate learnable Modular Task Queries (MTQs) and Visual Alignment Layers (VALs) into MLLMs, activating specific visual signals under the supervision of diverse VFMs. To coordinate representation conflicts across VFMs, the crafted Token Gateway Mask (TGM) restricts the information flow among multiple groups of MTQs. Extensive experiments demonstrate that VaCo significantly improves the performance of different MLLMs on various benchmarks, showcasing its superior capabilities in visual comprehension.
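To make the abstract's components concrete, below is a minimal, hypothetical PyTorch sketch of how grouped Modular Task Queries, per-VFM Visual Alignment Layers, and a block-diagonal Token Gateway Mask could fit together. All class names, shapes, losses, and hyperparameters here are illustrative assumptions based only on the abstract, not the authors' implementation.

```python
# Hypothetical sketch (not the paper's code): one group of Modular Task Queries (MTQs)
# per vision foundation model (VFM), a Visual Alignment Layer (VAL) per group, and a
# Token Gateway Mask (TGM) that blocks attention between different query groups.
import torch
import torch.nn as nn


class VaCoSketch(nn.Module):
    def __init__(self, hidden_dim=1024, queries_per_task=8, vfm_dims=(768, 1024)):
        super().__init__()
        self.num_tasks = len(vfm_dims)            # one MTQ group per VFM
        self.queries_per_task = queries_per_task
        # Learnable Modular Task Queries, concatenated across task groups.
        self.mtq = nn.Parameter(
            torch.randn(self.num_tasks * queries_per_task, hidden_dim) * 0.02
        )
        # Visual Alignment Layers: project MLLM-space queries into each VFM's feature space.
        self.vals = nn.ModuleList([nn.Linear(hidden_dim, d) for d in vfm_dims])

    def token_gateway_mask(self):
        """Block-diagonal additive attention mask: each MTQ attends only within its group."""
        n = self.num_tasks * self.queries_per_task
        mask = torch.full((n, n), float("-inf"))
        for t in range(self.num_tasks):
            s = t * self.queries_per_task
            mask[s:s + self.queries_per_task, s:s + self.queries_per_task] = 0.0
        return mask  # added to attention logits so cross-group information flow is suppressed

    def alignment_loss(self, hidden_queries, vfm_features):
        """Cosine alignment between each projected query group and its VFM's pooled features.

        hidden_queries: (B, num_tasks * queries_per_task, hidden_dim) MTQ states from the MLLM.
        vfm_features:   list of (B, L_t, vfm_dims[t]) feature maps, one per VFM.
        """
        losses = []
        for t, val in enumerate(self.vals):
            s = t * self.queries_per_task
            q = val(hidden_queries[:, s:s + self.queries_per_task])   # (B, Q, d_vfm)
            target = vfm_features[t].mean(dim=1, keepdim=True)        # (B, 1, d_vfm) pooled
            losses.append(1 - nn.functional.cosine_similarity(q, target, dim=-1).mean())
        return torch.stack(losses).mean()
```

In this reading, the TGM is simply an additive attention mask passed to the MLLM's self-attention over the MTQ tokens, and the alignment loss would be combined with the standard next-token prediction loss; the actual supervision targets and weighting in VaCo may differ.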
Similar Papers
CoCoVa: Chain of Continuous Vision-Language Thought for Latent Space Reasoning
CV and Pattern Recognition
Helps computers understand pictures like people do.
Rethinking Visual Information Processing in Multimodal LLMs
CV and Pattern Recognition
Lets computers understand pictures and words together better.