A Glimpse to Compress: Dynamic Visual Token Pruning for Large Vision-Language Models
By: Quan-Sheng Zeng, Yunheng Li, Qilong Wang, and more
Potential Business Impact:
Cuts ~93% of image tokens to keep AI sharp and fast
Visual token compression is critical for Large Vision-Language Models (LVLMs) to efficiently process high-resolution inputs. Existing methods typically adopt fixed compression ratios and cannot adapt to scenes of varying complexity, often causing imprecise pruning that discards informative visual tokens and degrades model performance. To address this issue, we introduce a dynamic pruning framework, GlimpsePrune, inspired by human cognition. It takes a data-driven "glimpse" and prunes irrelevant visual tokens in a single forward pass before answer generation. This approach prunes 92.6% of visual tokens while, on average, fully retaining the baseline performance on free-form VQA tasks. The reduced computational cost also enables more effective fine-tuning: an enhanced GlimpsePrune+ achieves 110% of the baseline performance while maintaining a similarly high pruning rate. Our work paves the way for building more powerful and efficient LVLMs.
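The core idea of relevance-based token pruning can be illustrated with a minimal sketch. Note this is not the paper's actual method: GlimpsePrune's "glimpse" is learned and data-driven, whereas the score here is an assumed stand-in (e.g., text-to-image attention averaged over heads). The function name, keep ratio, and scoring are illustrative assumptions.

```python
import numpy as np

def prune_visual_tokens(visual_tokens, relevance_scores, keep_ratio=0.074):
    """Keep only the top-scoring visual tokens (hypothetical helper).

    visual_tokens: (N, D) array of token embeddings.
    relevance_scores: (N,) per-token relevance, e.g. query-to-image
        attention. A stand-in for the paper's learned glimpse signal.
    keep_ratio: fraction of tokens to retain; 0.074 mirrors the
        reported 92.6% pruning rate.
    """
    n_tokens = visual_tokens.shape[0]
    n_keep = max(1, int(round(n_tokens * keep_ratio)))
    # Select indices of the n_keep highest-scoring tokens.
    keep_idx = np.argsort(relevance_scores)[-n_keep:]
    # Re-sort so kept tokens preserve their original spatial order.
    keep_idx = np.sort(keep_idx)
    return visual_tokens[keep_idx], keep_idx

# Example: 1000 visual tokens of dimension 64, keep ~7.4%.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((1000, 64))
scores = rng.random(1000)
kept, idx = prune_visual_tokens(tokens, scores)
print(kept.shape)  # (74, 64)
```

Because pruning happens once, before answer generation, every subsequent decoding step attends over only the retained 7.4% of visual tokens, which is where the computational savings come from.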
Similar Papers
GreedyPrune: Retenting Critical Visual Token Set for Large Vision Language Models
CV and Pattern Recognition
Makes AI understand pictures faster and cheaper.
LVPruning: An Effective yet Simple Language-Guided Vision Token Pruning Approach for Multi-modal Large Language Models
Computation and Language
Makes smart AI see and think faster.
Can Visual Input Be Compressed? A Visual Token Compression Benchmark for Large Multimodal Models
CV and Pattern Recognition
Makes AI understand pictures faster and better.