Score: 1

DynTok: Dynamic Compression of Visual Tokens for Efficient and Effective Video Understanding

Published: June 4, 2025 | arXiv ID: 2506.03990v1

By: Hongzhi Zhang, Jingyuan Zhang, Xingguang Ji, and more

BigTech Affiliations: Kuaishou

Potential Business Impact:

Lets computers understand videos using far fewer processing steps.

Business Areas:
Video Editing, Content and Publishing, Media and Entertainment, Video

Typical video modeling methods, such as LLaVA, represent videos as sequences of visual tokens, which are then processed by the LLM backbone for effective video understanding. However, this approach produces a massive number of visual tokens, especially for long videos. A practical solution is to first extract the relevant visual information from the large visual context before feeding it into the LLM backbone, thereby reducing computational overhead. In this work, we introduce DynTok, a novel Dynamic video Token compression strategy. DynTok adaptively splits visual tokens into groups and merges the tokens within each group, achieving high compression in regions with low information density while preserving essential content. Our method reduces the number of tokens to 44.4% of the original size while maintaining comparable performance. It further benefits from increasing the number of video frames, achieving 65.3% on Video-MME and 72.5% on MLVU. By applying this simple yet effective compression method, we expose the redundancy in video token representations and offer insights for designing more efficient video modeling techniques.
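The abstract describes adaptively grouping visual tokens and merging within each group, compressing most where information density is low. The paper's exact grouping rule is not given here, so the following is only a minimal sketch of one plausible variant: consecutive token embeddings are merged into the current group while they stay cosine-similar to the group's running mean, and the function name, threshold, and similarity criterion are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def dyntok_merge(tokens, sim_threshold=0.9):
    """Illustrative sketch of dynamic token grouping and merging.

    tokens: (N, D) array of visual token embeddings.
    A token joins the current group if its cosine similarity to the
    group's running mean exceeds sim_threshold; otherwise it starts a
    new group. Each group is collapsed to its mean, so redundant
    (low-information-density) regions compress heavily while
    distinctive tokens survive nearly untouched.
    """
    groups = [[tokens[0]]]
    for t in tokens[1:]:
        mean = np.mean(groups[-1], axis=0)
        cos = t @ mean / (np.linalg.norm(t) * np.linalg.norm(mean) + 1e-8)
        if cos > sim_threshold:
            groups[-1].append(t)   # redundant: merge into current group
        else:
            groups.append([t])     # distinctive: open a new group
    return np.stack([np.mean(g, axis=0) for g in groups])
```

On a sequence of five identical tokens followed by five orthogonal ones, this sketch would collapse the ten inputs to two merged tokens, mirroring the paper's point that static video regions carry heavy token redundancy.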

Country of Origin
🇨🇳 China

Page Count
11 pages

Category
Computer Science:
Computation and Language