EVTP-IVS: Effective Visual Token Pruning For Unifying Instruction Visual Segmentation In Multi-Modal Large Language Models
By: Wenhui Zhu, Xiwen Chen, Zhipeng Wang and more
Potential Business Impact:
Makes AI understand pictures faster by picking key parts.
Instructed Visual Segmentation (IVS) tasks require segmenting objects in images or videos based on natural language instructions. While recent multimodal large language models (MLLMs) have achieved strong performance on IVS, their inference cost remains a major bottleneck, particularly for video. We empirically analyze visual token sampling in MLLMs and observe a strong correlation between the spatial coverage of a token subset and segmentation performance. This motivates our design of a simple and effective token pruning method that selects a compact yet spatially representative subset of tokens to accelerate inference. In this paper, we introduce a novel visual token pruning method for IVS, called EVTP-IVS, which builds upon the k-center selection algorithm by integrating spatial information to ensure better coverage. We further provide an information-theoretic analysis to support our design. Experiments on standard IVS benchmarks show that our method achieves up to a 5X speed-up on video tasks and 3.5X on image tasks, while maintaining comparable accuracy using only 20% of the tokens. Our method also consistently outperforms state-of-the-art pruning baselines across varying pruning ratios.
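The abstract describes selecting a compact, spatially representative token subset via k-center selection augmented with spatial information. A minimal sketch of that idea is greedy k-center (farthest-point) selection over token embeddings concatenated with normalized grid coordinates; the function name, the `spatial_weight` trade-off parameter, and the specific normalization below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def kcenter_token_pruning(features, positions, keep_ratio=0.2, spatial_weight=0.5):
    """Greedy k-center (farthest-point) selection over visual tokens.

    features:  (N, D) array of token embeddings.
    positions: (N, 2) array of (row, col) grid coordinates.
    Returns the indices of the selected token subset.
    NOTE: illustrative sketch only; not the authors' EVTP-IVS code.
    """
    n = features.shape[0]
    k = max(1, int(n * keep_ratio))

    # Normalize each view so feature and spatial distances are comparable,
    # then concatenate; spatial_weight trades semantic vs. spatial coverage.
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-8)
    p = positions / (positions.max(axis=0, keepdims=True) + 1e-8)
    joint = np.concatenate([f, spatial_weight * p], axis=1)

    # Greedy k-center: start from token 0, then repeatedly add the token
    # farthest from the current subset (a 2-approximation of the optimal
    # covering radius), which encourages broad coverage of the token set.
    selected = [0]
    dists = np.linalg.norm(joint - joint[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(joint - joint[nxt], axis=1))
    return np.array(selected)
```

With `keep_ratio=0.2`, the function returns indices for 20% of the tokens; the remaining tokens would be dropped before the MLLM's decoder, which is the source of the reported speed-up.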
Similar Papers
Efficient Video Sampling: Pruning Temporally Redundant Tokens for Faster VLM Inference
CV and Pattern Recognition
Lets computers understand videos faster.
Back to Fundamentals: Low-Level Visual Features Guided Progressive Token Pruning
CV and Pattern Recognition
Makes AI see details with less computer power.
Object-Centric Vision Token Pruning for Vision Language Models
CV and Pattern Recognition
Makes AI understand pictures faster and better.