Adaptive-VoCo: Complexity-Aware Visual Token Compression for Vision-Language Models
By: Xiaoyang Guo, Keze Wang
In recent years, large-scale vision-language models (VLMs) have demonstrated remarkable performance on multimodal understanding and reasoning tasks. However, the long sequences of visual patch tokens they process incur substantial computational and memory costs. VoCo-LLaMA alleviates this issue by compressing visual patch tokens into a small number of VoCo tokens, reducing computational overhead while preserving strong cross-modal alignment. Nevertheless, such approaches typically adopt a fixed compression rate, which limits their ability to adapt to varying levels of visual complexity. To address this limitation, we propose Adaptive-VoCo, a framework that augments VoCo-LLaMA with a lightweight predictor for adaptive compression. The predictor quantifies an image's visual complexity from statistical cues produced by the vision encoder, such as patch token entropy and attention map variance, and dynamically selects a compression rate accordingly. Furthermore, we introduce a joint loss function that combines rate regularization with a complexity alignment term, enabling the model to balance inference efficiency against representational capacity, particularly in challenging scenarios. Experimental results show that our method consistently outperforms fixed-rate baselines across multiple multimodal tasks, highlighting the potential of adaptive visual compression for building more efficient and robust VLMs.
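The abstract only sketches the mechanism, so the following is a minimal PyTorch sketch of how a complexity-aware rate predictor and the joint loss could be wired up. Everything here is an illustrative assumption, not the authors' implementation: the feature choices (token-energy entropy, attention variance), the candidate rate set, the names (CompressionRatePredictor, joint_loss), and the loss weights are all hypothetical.

```python
# Minimal sketch (assumptions, not the authors' code) of a complexity-aware
# compression-rate predictor and a joint loss with rate regularization
# and complexity alignment, as described in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F


def visual_complexity_features(patch_tokens, attn_maps):
    """Statistical cues from the vision encoder.

    patch_tokens: (B, N, D) patch token embeddings
    attn_maps:    (B, H, N, N) self-attention maps from the last encoder layer
    Returns a (B, 2) tensor: [patch-token entropy, attention-map variance].
    """
    # Entropy of the per-image distribution over patch-token "energy".
    energy = patch_tokens.norm(dim=-1)                    # (B, N)
    p = F.softmax(energy, dim=-1)                         # (B, N)
    entropy = -(p * (p + 1e-8).log()).sum(dim=-1)         # (B,)

    # Variance of attention weights, averaged over heads and query positions.
    attn_var = attn_maps.var(dim=-1).mean(dim=(-1, -2))   # (B,)
    return torch.stack([entropy, attn_var], dim=-1)       # (B, 2)


class CompressionRatePredictor(nn.Module):
    """Lightweight MLP that scores a small set of candidate compression rates."""

    def __init__(self, num_rates=4, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, num_rates)
        )

    def forward(self, patch_tokens, attn_maps):
        feats = visual_complexity_features(patch_tokens, attn_maps)
        return self.mlp(feats)                            # (B, num_rates) logits


def joint_loss(task_loss, rate_logits, complexity, rates,
               lambda_rate=0.1, lambda_align=0.1):
    """Joint objective: task loss + rate regularization + complexity alignment.

    rates:      (num_rates,) candidate rates, e.g. fraction of tokens kept.
    complexity: (B,) per-image complexity score normalized to [0, 1].
    """
    probs = F.softmax(rate_logits, dim=-1)                # (B, num_rates)
    expected_rate = (probs * rates).sum(dim=-1)           # (B,)

    # Rate regularization: discourage keeping more tokens than necessary.
    rate_reg = expected_rate.mean()

    # Complexity alignment: more complex images should keep more tokens.
    align = F.mse_loss(expected_rate, complexity)

    return task_loss + lambda_rate * rate_reg + lambda_align * align
```

In this sketch the predictor outputs a distribution over a discrete set of compression rates, which keeps the extra compute negligible relative to the VLM itself; how the selected rate is actually applied to the VoCo tokens, and how the complexity target is normalized, are design choices the paper itself would have to specify.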