StreamingAssistant: Efficient Visual Token Pruning for Accelerating Online Video Understanding
By: Xinqi Jin, Hanxun Yu, Bohan Yu, and more
Potential Business Impact:
Helps computers understand videos faster.
Online video understanding is essential for applications like public surveillance and AI glasses. However, applying Multimodal Large Language Models (MLLMs) to this domain is challenging due to the large number of video frames, which results in high GPU memory usage and computational latency. To address these challenges, we propose token pruning as a means to reduce context length while retaining critical information. Specifically, we introduce a novel redundancy metric, Maximum Similarity to Spatially Adjacent Video Tokens (MSSAVT), which accounts for both token similarity and spatial position. To mitigate the bidirectional dependency between pruning and redundancy (pruning one token alters the redundancy of its neighbors), we further design a masked pruning strategy that ensures only mutually non-adjacent tokens are pruned. We also integrate an existing temporal-redundancy-based pruning method to reduce temporal redundancy across video frames. Experimental results on multiple online and offline video understanding benchmarks demonstrate that our method significantly improves accuracy (by up to 4%) while incurring negligible pruning latency (under 1 ms). Our full implementation will be made publicly available.
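The abstract does not spell out the computation, but the two core ideas lend themselves to a short sketch. Below is a minimal, hypothetical PyTorch illustration of (a) an MSSAVT-style score, taking each token's maximum cosine similarity to its spatially adjacent tokens on the frame grid, and (b) a masked pruning pass that blocks the neighbors of every pruned token so that pruned tokens stay mutually non-adjacent. The 4-neighborhood, cosine similarity, and greedy keep-budget loop are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch of MSSAVT scoring and masked pruning, as described
# in the abstract. Neighborhood choice, similarity measure, and the greedy
# loop are assumptions; the paper's exact method may differ.
import torch
import torch.nn.functional as F


def mssavt(tokens: torch.Tensor, h: int, w: int) -> torch.Tensor:
    """Maximum Similarity to Spatially Adjacent Video Tokens.

    tokens: (h*w, d) visual tokens of one frame, laid out row-major on an
    h x w grid. Returns an (h*w,) score; higher means more redundant.
    """
    x = F.normalize(tokens, dim=-1).view(h, w, -1)
    score = torch.full((h, w), float("-inf"))
    # 4-neighborhood (up/down/left/right) -- an assumed choice.
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        ys = slice(max(dy, 0), h + min(dy, 0))
        xs = slice(max(dx, 0), w + min(dx, 0))
        nys = slice(max(-dy, 0), h + min(-dy, 0))
        nxs = slice(max(-dx, 0), w + min(-dx, 0))
        # Cosine similarity between each token and its shifted neighbor.
        sim = (x[ys, xs] * x[nys, nxs]).sum(-1)
        score[ys, xs] = torch.maximum(score[ys, xs], sim)
    return score.view(-1)


def masked_prune(scores: torch.Tensor, h: int, w: int, ratio: float) -> torch.Tensor:
    """Greedily prune the most redundant tokens, masking the spatial
    neighbors of each pruned token so pruned tokens are mutually non-adjacent.
    Returns a boolean keep-mask over the h*w tokens."""
    budget = int(h * w * ratio)
    blocked = torch.zeros(h * w, dtype=torch.bool)
    pruned = torch.zeros(h * w, dtype=torch.bool)
    for idx in torch.argsort(scores, descending=True):
        if pruned.sum() >= budget:
            break
        if blocked[idx]:
            continue  # adjacent to an already-pruned token
        pruned[idx] = True
        y, x = divmod(idx.item(), w)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                blocked[ny * w + nx] = True
    return ~pruned


# Usage: keep-mask for one frame of 24x24 tokens, pruning up to 25%.
tokens = torch.randn(24 * 24, 1024)
keep = masked_prune(mssavt(tokens, 24, 24), 24, 24, ratio=0.25)
```

Note that the mutual non-adjacency constraint naturally caps how many tokens one pass can prune (no two pruned tokens may touch), which is consistent with the abstract's point that pruning a token changes the redundancy of its neighbors.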
Similar Papers
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
CV and Pattern Recognition
Lets computers watch long videos faster.
Leveraging KV Similarity for Online Structured Pruning in LLMs
Computation and Language
Makes AI models faster and smarter without extra training.
VLM-Pruner: Buffering for Spatial Sparsity in an Efficient VLM Centrifugal Token Pruning Paradigm
CV and Pattern Recognition
Makes AI understand pictures faster on phones.