Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs
By: Jeongseok Hyun, Sukjun Hwang, Su Ho Han, and more
Potential Business Impact:
Lets computers understand videos using far less computing power.
Video large language models (LLMs) achieve strong video understanding by leveraging a large number of spatio-temporal tokens, but suffer from quadratic computational scaling with token count. To address this, we propose a training-free spatio-temporal token merging method, named STTM. Our key insight is to exploit local spatial and temporal redundancy in video data, which has been overlooked in prior work. STTM first transforms each frame into multi-granular spatial tokens using a coarse-to-fine search over a quadtree structure, then performs directed pairwise merging across the temporal dimension. This decomposed merging approach outperforms existing token reduction methods across six video QA benchmarks. Notably, STTM achieves a 2× speed-up with only a 0.5% accuracy drop under a 50% token budget, and a 3× speed-up with just a 2% drop under a 30% budget. Moreover, STTM is query-agnostic, allowing KV cache reuse across different questions for the same video. The project page is available at https://www.jshyun.me/projects/sttm.
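To make the two-stage procedure concrete, here is a minimal sketch in NumPy. It assumes cosine similarity as the merging criterion, mean pooling as the merge operator, and a fixed similarity threshold in place of a token budget; all function names, thresholds, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of decomposed spatio-temporal token merging.
# Assumptions (not from the paper): cosine similarity decides merges,
# merged tokens are mean-pooled, and fixed thresholds stand in for a
# token budget.
import numpy as np

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def quadtree_merge(tokens, y0, y1, x0, x1, thresh, out):
    """Coarse-to-fine search over a quadtree: if every token in the
    quadrant is similar to the quadrant mean, emit one coarse token;
    otherwise recurse into the four sub-quadrants."""
    region = tokens[y0:y1, x0:x1].reshape(-1, tokens.shape[-1])
    center = region.mean(axis=0)
    single = (y1 - y0) <= 1 and (x1 - x0) <= 1
    if single or all(_cos(t, center) >= thresh for t in region):
        out.append(center)
        return
    ym, xm = (y0 + y1) // 2, (x0 + x1) // 2
    for ys, ye, xs, xe in ((y0, ym, x0, xm), (y0, ym, xm, x1),
                           (ym, y1, x0, xm), (ym, y1, xm, x1)):
        if ye > ys and xe > xs:
            quadtree_merge(tokens, ys, ye, xs, xe, thresh, out)

def temporal_merge(frames, thresh):
    """Directed pairwise merging: a token in frame t is absorbed into
    frame t-1 when a sufficiently similar token exists there, so only
    temporally novel tokens survive."""
    out = [frames[0]]
    for prev, cur in zip(frames[:-1], frames[1:]):
        survivors = [t for t in cur if max(_cos(t, p) for p in prev) < thresh]
        out.append(np.stack(survivors) if survivors
                   else np.empty((0, cur.shape[-1]), cur.dtype))
    return out

# Toy usage: 4 frames of an 8x8 token grid, made highly redundant on purpose.
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 1, 1, 64)).astype(np.float32)
video = base + 0.05 * rng.normal(size=(4, 8, 8, 64)).astype(np.float32)

spatial = []
for frame in video:
    merged = []
    quadtree_merge(frame, 0, frame.shape[0], 0, frame.shape[1], 0.9, merged)
    spatial.append(np.stack(merged))

reduced = temporal_merge(spatial, thresh=0.9)
print([len(f) for f in reduced])  # far fewer than the 4 * 64 original tokens
```

Under a real token budget, the fixed thresholds above would instead be tuned, or replaced by a top-k selection, until the surviving token count meets the target, e.g. the 50% or 30% budgets cited in the abstract.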
Similar Papers
HTTM: Head-wise Temporal Token Merging for Faster VGGT
CV and Pattern Recognition
Makes 3D scene building much faster.
SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal Video Grounding Capability
CV and Pattern Recognition
Helps computers find objects in videos.
Video, How Do Your Tokens Merge?
CV and Pattern Recognition
Speeds up video models without losing quality.