AdaTok: Adaptive Token Compression with Object-Aware Representations for Efficient Multimodal LLMs
By: Xinliang Zhang, Lei Zhu, Hangzhou He, and more
Potential Business Impact:
Makes AI understand pictures using fewer computer steps.
Multimodal Large Language Models (MLLMs) have demonstrated substantial value in unified text-image understanding and reasoning, primarily by converting images into sequences of patch-level tokens that align with their architectural paradigm. However, patch-level tokenization leads to quadratic growth in the number of image tokens as resolution increases, burdening MLLM understanding and reasoning with enormous computation and memory costs. Additionally, the traditional patch-wise scanning tokenization workflow is misaligned with human visual cognition, which further leads to hallucination and computational redundancy. To address these issues, we propose an object-level token merging strategy for Adaptive Token compression that is more consistent with the human vision system. Experiments on multiple comprehensive benchmarks show that, on average, our approach uses only 10% of the tokens while retaining almost 96% of the vanilla model's performance. More extensive comparisons with related works demonstrate the superiority of our method in balancing compression ratio and performance. Our code will be made available.
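The abstract does not specify how object-level merging is implemented, so the following is only a minimal sketch, not the authors' method. It assumes patch-to-object assignments come from an external segmentation mask downsampled to the patch grid, and that tokens belonging to the same object are simply average-pooled; the function and variable names are illustrative.

```python
import torch

def merge_patch_tokens_by_object(patch_tokens: torch.Tensor,
                                 object_ids: torch.Tensor) -> torch.Tensor:
    """Merge patch-level tokens into object-level tokens by average pooling.

    patch_tokens: [N, D] visual tokens from the vision encoder (one per patch).
    object_ids:   [N] integer object/segment label per patch, e.g. a segmentation
                  mask downsampled to the patch grid (assumed, not from the paper).
    Returns:      [K, D] one token per distinct object, K = number of objects.
    """
    labels = torch.unique(object_ids)
    merged = torch.stack(
        [patch_tokens[object_ids == k].mean(dim=0) for k in labels]
    )
    return merged

# Toy example: 576 patch tokens (a 24x24 grid) compressed to 12 object tokens,
# roughly the order of reduction the abstract reports (~10% of the tokens).
patch_tokens = torch.randn(576, 1024)
object_ids = torch.randint(0, 12, (576,))  # pretend each patch belongs to 1 of 12 objects
object_tokens = merge_patch_tokens_by_object(patch_tokens, object_ids)
print(object_tokens.shape)  # torch.Size([12, 1024])
```

The key design point this sketch illustrates is that the token count after merging depends on the number of objects in the image rather than on the patch grid size, which is what makes the compression adaptive to image content.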
Similar Papers
HybridToken-VLM: Hybrid Token Compression for Vision-Language Models
CV and Pattern Recognition
Lets computers understand pictures better, faster.
Towards Lossless Ultimate Vision Token Compression for VLMs
CV and Pattern Recognition
Makes AI understand pictures much faster.
CORE: Compact Object-centric REpresentations as a New Paradigm for Token Merging in LVLMs
CV and Pattern Recognition
Makes AI understand pictures using less computer power.