Learning Compact Vision Tokens for Efficient Large Multimodal Models
By: Hao Tang, Chengchao Shen
Potential Business Impact:
Makes AI understand pictures much faster.
Large multimodal models (LMMs) face significant computational challenges due to the high cost of large language models (LLMs) and the quadratic complexity of processing long vision token sequences. In this paper, we exploit the spatial redundancy among vision tokens and shorten vision token sequences to accelerate inference. Specifically, we propose a Spatial Token Fusion (STF) method that learns compact vision tokens for short vision token sequences, where spatially adjacent tokens are fused into one. Meanwhile, a weight-frozen vision encoder cannot adapt well to the demands of diverse downstream vision-language tasks. To this end, we further introduce a Multi-Block Token Fusion (MBTF) module to supplement multi-granularity features for the reduced token sequence. Overall, we combine the STF and MBTF modules to balance token reduction against information preservation, thereby improving inference efficiency without sacrificing multimodal reasoning capability. Experimental results demonstrate that our method, built on LLaVA-1.5, achieves comparable or even superior performance to the baseline on 8 popular vision-language benchmarks while using only 25% of the baseline's vision tokens. The source code and trained weights are available at https://github.com/visresearch/LLaVA-STF.
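To make the core idea concrete, below is a minimal PyTorch sketch of spatial token fusion as the abstract describes it: each 2x2 neighborhood of grid-arranged vision tokens is merged into a single token, cutting the sequence length to 25% of the original. This is an illustrative assumption, not the authors' released implementation (see the GitHub repository for that); in particular, the choice of concatenation followed by a linear projection, the class name SpatialTokenFusion, and the 24x24 CLIP-ViT token grid in the example are all hypothetical.

```python
import torch
import torch.nn as nn


class SpatialTokenFusion(nn.Module):
    """Fuse each 2x2 neighborhood of vision tokens into one token.

    Hypothetical sketch: fusion is assumed to be concatenation of the
    four spatially adjacent tokens followed by a linear projection,
    which reduces the token count to 25% of the original.
    """

    def __init__(self, dim: int, window: int = 2):
        super().__init__()
        self.window = window
        # Project the concatenated neighborhood back to the model dimension.
        self.proj = nn.Linear(dim * window * window, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, H*W, dim), assuming a square H x W token grid.
        b, n, d = tokens.shape
        h = w = int(n ** 0.5)
        s = self.window
        x = tokens.view(b, h, w, d)
        # Group each s x s neighborhood together, then flatten it into
        # one vector per neighborhood: (b, (h/s)*(w/s), s*s*d).
        x = x.view(b, h // s, s, w // s, s, d).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(b, (h // s) * (w // s), s * s * d)
        return self.proj(x)  # (batch, N/4, dim) for a 2x2 window


# Example: 576 vision tokens (a 24x24 grid) -> 144 fused tokens.
stf = SpatialTokenFusion(dim=1024)
vision_tokens = torch.randn(1, 576, 1024)
print(stf(vision_tokens).shape)  # torch.Size([1, 144, 1024])
```

The MBTF module described in the abstract would additionally draw features from multiple encoder blocks before fusion, compensating the shortened sequence with multi-granularity information; that part is omitted here for brevity.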
Similar Papers
ToFu: Visual Tokens Reduction via Fusion for Multi-modal, Multi-patch, Multi-image Task
CV and Pattern Recognition
Fuses pictures to make AI understand more.
LFTR: Learning-Free Token Reduction for Multimodal Large Language Models
CV and Pattern Recognition
Makes smart computer vision faster and cheaper.
Vision-LLMs for Spatiotemporal Traffic Forecasting
Machine Learning (CS)
Predicts city traffic jams before they happen.