Index-Preserving Lightweight Token Pruning for Efficient Document Understanding in Vision-Language Models

Published: September 8, 2025 | arXiv ID: 2509.06415v1

By: Jaemin Son, Sujin Choi, Inyong Yun

Potential Business Impact:

Enables AI systems to read and understand documents faster and at lower cost.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent progress in vision-language models (VLMs) has led to impressive results in document understanding tasks, but their high computational demands remain a challenge. To mitigate this computational burden, we propose a lightweight token pruning framework that filters out non-informative background regions from document images prior to VLM processing. A binary patch-level classifier removes non-text areas, and a max-pooling refinement step recovers fragmented text regions to enhance spatial coherence. Experiments on real-world document datasets demonstrate that our approach substantially lowers computational costs while maintaining comparable accuracy.
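The pipeline described in the abstract can be sketched as follows. This is a minimal illustration of the general idea, not the paper's implementation: the classifier is stubbed out as a precomputed score grid, and the threshold and pooling window size are assumed values.

```python
import numpy as np

def prune_patches(patch_scores, threshold=0.5, pool=3):
    """Illustrative sketch of index-preserving patch pruning.

    patch_scores: (H, W) grid of text-likelihood scores, assumed to
    come from a binary patch-level classifier (details hypothetical).
    """
    # Step 1: threshold classifier scores to drop background patches.
    mask = (patch_scores >= threshold).astype(np.uint8)

    # Step 2: max-pooling refinement -- keep a patch if any patch in
    # its pool x pool neighborhood was classified as text, which
    # reconnects fragmented text regions for spatial coherence.
    H, W = mask.shape
    pad = pool // 2
    padded = np.pad(mask, pad)
    refined = np.zeros_like(mask)
    for i in range(H):
        for j in range(W):
            refined[i, j] = padded[i:i + pool, j:j + pool].max()

    # Step 3: index-preserving selection -- return the flat indices of
    # kept patches so their positional information stays valid when
    # the surviving tokens are passed to the VLM.
    kept = np.flatnonzero(refined.reshape(-1))
    return refined, kept

# Toy example: a 4x4 score grid with a single isolated text patch.
scores = np.zeros((4, 4))
scores[1, 1] = 0.9
refined, kept = prune_patches(scores)
```

In this toy run, only one patch scores as text, but the 3x3 max-pooling refinement also retains its immediate neighbors, so the kept set forms a coherent block rather than an isolated token.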

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition