Score: 3

Object-Centric Vision Token Pruning for Vision Language Models

Published: November 25, 2025 | arXiv ID: 2511.20439v1

By: Guangyuan Li, Rongzhen Zhao, Jinhong Deng, and more

Potential Business Impact:

Lets AI models understand images faster without losing accuracy.

Business Areas:
Image Recognition Data and Analytics, Software

In Vision Language Models (VLMs), vision tokens are far more numerous yet less information-dense than language tokens, so they consume a disproportionate amount of computation. Pruning redundant vision tokens to improve VLM inference efficiency has been studied extensively, but all existing methods rely on indirect, non-guaranteed selection criteria. We propose OC-VTP, a direct and guaranteed approach that selects the most representative vision tokens for efficient yet accuracy-preserving VLM inference. OC-VTP requires only lightweight pre-training of a small object-centric vision token pruner, which can then be inserted into existing VLMs without fine-tuning any model on any dataset. Keeping the most representative vision tokens is guaranteed by minimizing the error of reconstructing the original, unpruned tokens from the selected ones. Across all vision pruning ratios, i.e., levels of inference efficiency, OC-VTP consistently helps mainstream VLMs preserve the highest inference accuracy. Our pruning also exhibits interesting interpretability. Our code is available at https://github.com/GarryLarry010131/OC-VTP.
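The abstract's core idea, keeping the subset of vision tokens from which the full, unpruned token set can best be reconstructed, can be sketched in a few lines of PyTorch. This is a minimal illustration under stated assumptions, not the authors' OC-VTP implementation: the softmax cross-attention reconstructor, the greedy per-token scoring heuristic, and all names (reconstruct, select_tokens, keep_ratio) are hypothetical stand-ins for the paper's learned object-centric pruner.

```python
# Minimal sketch of reconstruction-guided vision token pruning.
# NOT the authors' OC-VTP code: the reconstructor and the greedy
# scoring heuristic below are illustrative assumptions.
import torch
import torch.nn.functional as F


def reconstruct(kept: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Rebuild each target token as an attention-weighted mixture of
    the kept tokens (a stand-in for a learned pruner/decoder)."""
    # (N, D) @ (K, D)^T -> (N, K) attention weights over kept tokens
    attn = F.softmax(targets @ kept.T / kept.shape[-1] ** 0.5, dim=-1)
    return attn @ kept  # (N, D)


def select_tokens(tokens: torch.Tensor, keep_ratio: float) -> torch.Tensor:
    """Keep the tokens that the remaining tokens reconstruct worst,
    i.e. the least redundant ones (a greedy proxy for minimizing the
    reconstruction error of the full set from the selected subset)."""
    n, _ = tokens.shape
    k = max(1, int(n * keep_ratio))
    scores = torch.empty(n)
    for i in range(n):
        others = torch.cat([tokens[:i], tokens[i + 1:]])
        recon = reconstruct(others, tokens[i:i + 1])
        scores[i] = F.mse_loss(recon, tokens[i:i + 1])
    keep_idx = scores.topk(k).indices.sort().values
    return tokens[keep_idx]


if __name__ == "__main__":
    vision_tokens = torch.randn(576, 1024)  # e.g. a 24x24 patch grid
    pruned = select_tokens(vision_tokens, keep_ratio=0.25)
    recon = reconstruct(pruned, vision_tokens)
    err = F.mse_loss(recon, vision_tokens)
    print(pruned.shape, f"reconstruction MSE: {err.item():.4f}")
```

In this sketch the reconstruction MSE over all original tokens is the quantity the paper's objective minimizes; in OC-VTP the pruner that produces the selection is pre-trained once and then dropped into existing VLMs without further fine-tuning.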

Country of Origin
🇳🇱 🇫🇮 Netherlands, Finland

Repos / Data Links
https://github.com/GarryLarry010131/OC-VTP

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition