Multi-Granular Spatio-Temporal Token Merging for Training-Free Acceleration of Video LLMs

Published: July 10, 2025 | arXiv ID: 2507.07990v1

By: Jeongseok Hyun, Sukjun Hwang, Su Ho Han, and others

Potential Business Impact:

Enables video understanding with substantially less compute: 2-3x faster video LLM inference with minimal accuracy loss, lowering serving costs for video analysis applications.

Business Areas:
Text Analytics, Data and Analytics, Software

Video large language models (LLMs) achieve strong video understanding by leveraging a large number of spatio-temporal tokens, but suffer from quadratic computational scaling with token count. To address this, we propose a training-free spatio-temporal token merging method, named STTM. Our key insight is to exploit local spatial and temporal redundancy in video data which has been overlooked in prior work. STTM first transforms each frame into multi-granular spatial tokens using a coarse-to-fine search over a quadtree structure, then performs directed pairwise merging across the temporal dimension. This decomposed merging approach outperforms existing token reduction methods across six video QA benchmarks. Notably, STTM achieves a 2$\times$ speed-up with only a 0.5% accuracy drop under a 50% token budget, and a 3$\times$ speed-up with just a 2% drop under a 30% budget. Moreover, STTM is query-agnostic, allowing KV cache reuse across different questions for the same video. The project page is available at https://www.jshyun.me/projects/sttm.
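To make the two-stage idea in the abstract concrete, here is a minimal, hypothetical sketch of (1) coarse-to-fine quadtree merging of a spatial token grid and (2) directed pairwise merging across frames. This is not the authors' STTM implementation; the similarity measure (cosine), thresholds, and function names are illustrative assumptions.

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity between two token vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def quadtree_merge(tokens, tau):
    """Coarse-to-fine spatial merging over an (H, W, D) token grid.
    If every token in a region is similar to the region mean (>= tau),
    the region collapses to one mean token; otherwise it splits into
    quadrants and the search recurses (hypothetical variant of STTM's
    quadtree search)."""
    out = []
    def visit(y, x, h, w):
        region = tokens[y:y+h, x:x+w].reshape(-1, tokens.shape[-1])
        mean = region.mean(axis=0)
        if (h == 1 and w == 1) or all(cos_sim(t, mean) >= tau for t in region):
            out.append(mean)  # whole region merges into one token
        else:
            h2, w2 = max(h // 2, 1), max(w // 2, 1)
            for dy, dx in [(0, 0), (0, w2), (h2, 0), (h2, w2)]:
                if dy < h and dx < w:
                    visit(y + dy, x + dx, min(h2, h - dy), min(w2, w - dx))
    visit(0, 0, tokens.shape[0], tokens.shape[1])
    return out

def temporal_merge(frame_tokens, tau):
    """Directed pairwise merging across time: a token in frame t is
    dropped (merged into its most similar kept token from frame t-1)
    when that similarity reaches tau. Query-agnostic: no question text
    is involved, so the result can be cached per video."""
    kept = [list(frame_tokens[0])]
    for cur in frame_tokens[1:]:
        prev = kept[-1]
        survivors = [tok for tok in cur
                     if not prev or max(cos_sim(tok, p) for p in prev) < tau]
        kept.append(survivors)
    return kept
```

Under this sketch, a static 4x4 region collapses to a single spatial token, and a frame identical to its predecessor contributes no new tokens, which is how redundancy translates into a smaller token budget.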

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition