Object-Centric Vision Token Pruning for Vision Language Models
By: Guangyuan Li, Rongzhen Zhao, Jinhong Deng, and more
Potential Business Impact:
Makes AI understand pictures faster and better.
In Vision Language Models (VLMs), vision tokens are quantity-heavy yet information-dispersed compared with language tokens, and thus consume a great deal of unnecessary computation. Pruning redundant vision tokens for high VLM inference efficiency has been studied continuously, but all existing methods resort to indirect, non-guaranteed criteria. We propose OC-VTP, a direct and guaranteed approach that selects the most representative vision tokens for high-efficiency yet accuracy-preserving VLM inference. OC-VTP requires only lightweight pre-training of a small object-centric vision token pruner, which can then be inserted into existing VLMs without fine-tuning any model on any dataset. Keeping the most representative vision tokens is guaranteed by minimizing the error in reconstructing the original unpruned tokens from the selected ones. Across all vision-token pruning ratios, i.e., levels of inference efficiency, OC-VTP consistently helps mainstream VLMs preserve the highest inference accuracy. Our pruning also exhibits interesting interpretability. Our code is available at https://github.com/GarryLarry010131/OC-VTP.
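To make the reconstruction-based selection criterion concrete, here is a minimal sketch of picking vision tokens whose span best reconstructs the full, unpruned token set. This is an illustrative stand-in only: the greedy search and least-squares reconstruction below are assumptions for clarity, whereas the paper's actual OC-VTP pruner is a small, learned object-centric module pre-trained once and inserted into existing VLMs.

```python
import torch

def reconstruction_error(tokens: torch.Tensor, idx: list[int]) -> torch.Tensor:
    """Error in reconstructing all tokens from the selected subset.

    Reconstruction here is a least-squares projection of every token onto
    the span of the selected tokens -- an illustrative proxy for the
    learned reconstruction objective described in the abstract.
    """
    S = tokens[idx]                                       # (k, d) selected tokens
    # Solve min_W ||W @ S - tokens||_F via least squares:
    # S.T @ W.T ~= tokens.T, so lstsq(S.T, tokens.T) yields W.T.
    W = torch.linalg.lstsq(S.T, tokens.T).solution.T      # (N, k) mixing weights
    recon = W @ S                                         # (N, d) reconstructed tokens
    return torch.linalg.norm(recon - tokens)

def greedy_select(tokens: torch.Tensor, k: int) -> list[int]:
    """Greedily pick k token indices that minimize reconstruction error."""
    chosen: list[int] = []
    remaining = list(range(tokens.shape[0]))
    for _ in range(k):
        errs = torch.stack(
            [reconstruction_error(tokens, chosen + [j]) for j in remaining]
        )
        best = remaining[int(errs.argmin())]
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage: 64 vision tokens of dim 32, keep 8 (an 87.5% pruning ratio).
tokens = torch.randn(64, 32)
kept = greedy_select(tokens, k=8)
pruned_tokens = tokens[kept]   # passed to the VLM in place of all 64 tokens
```

Under this framing, the kept tokens are "representative" in a verifiable sense: the discarded tokens can be approximately recovered from them, so the information handed to the VLM is preserved even as the token count, and hence inference cost, drops.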
Similar Papers
VLM-Pruner: Buffering for Spatial Sparsity in an Efficient VLM Centrifugal Token Pruning Paradigm
CV and Pattern Recognition
Makes AI understand pictures faster on phones.
AdaptVision: Efficient Vision-Language Models via Adaptive Visual Acquisition
CV and Pattern Recognition
Lets computers see smarter, using less data.
GreedyPrune: Retenting Critical Visual Token Set for Large Vision Language Models
CV and Pattern Recognition
Makes AI understand pictures faster and cheaper.