Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
By: Yogesh Kumar
Potential Business Impact:
Lets computers watch long videos faster.
Vision-Language Models (VLMs) struggle with long-form videos due to the quadratic complexity of attention mechanisms. We propose Language-Guided Temporal Token Pruning (LGTTP), which leverages temporal cues from queries to adaptively prune video tokens, preserving contextual continuity while reducing computational overhead. Unlike uniform pruning or keyframe selection, LGTTP retains higher token density in temporally relevant segments. Our model-agnostic framework integrates with TimeChat and LLaVA-Video, achieving a 65% reduction in computation while preserving 97-99% of the original performance. On QVHighlights, LGTTP improves HIT@1 by +9.5%, and on Charades-STA, it retains 99.6% of R@1. It excels on queries with explicit temporal markers and remains effective across general video understanding tasks.
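The core idea, giving temporally relevant segments a larger share of the token budget and pruning other segments more aggressively, can be sketched as follows. This is a minimal PyTorch illustration assuming a precomputed query-to-segment relevance score; the function names, the proportional budget rule, and the L2-norm saliency stand-in are assumptions made for exposition, not the paper's actual components.

import torch

def allocate_token_budget(relevance, total_budget, min_per_segment=4):
    """Split a global token budget across temporal segments in proportion to
    query relevance, with a small floor so every segment keeps some tokens
    (a rough analogue of preserving contextual continuity)."""
    weights = relevance / relevance.sum()
    flexible = total_budget - min_per_segment * relevance.numel()
    return (weights * flexible).long() + min_per_segment

def prune_video_tokens(segment_tokens, relevance, total_budget):
    """segment_tokens: list of (T_i, D) tensors, one per temporal segment.
    relevance: (S,) tensor of query-to-segment relevance scores.
    Keeps the top-k tokens of each segment, where k follows the
    relevance-weighted budget, and preserves temporal order."""
    budgets = allocate_token_budget(relevance, total_budget)
    kept = []
    for tokens, k in zip(segment_tokens, budgets.tolist()):
        k = min(k, tokens.shape[0])
        # Token L2 norm stands in for a learned saliency score here.
        scores = tokens.norm(dim=-1)
        idx = scores.topk(k).indices.sort().values
        kept.append(tokens[idx])
    return torch.cat(kept, dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Four temporal segments of 32 tokens each; a hypothetical query such as
    # "what happens near the end of the clip" favors the later segments.
    segments = [torch.randn(32, 256) for _ in range(4)]
    relevance = torch.tensor([0.1, 0.2, 0.8, 0.9])
    pruned = prune_video_tokens(segments, relevance, total_budget=48)
    print(pruned.shape)  # roughly 48 kept tokens, most from the later segments

The per-segment floor in the budget reflects the abstract's point that pruning should preserve contextual continuity rather than discard whole intervals of the video.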
Similar Papers
Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs
CV and Pattern Recognition
Lets computers understand long videos faster.
Aligning Effective Tokens with Video Anomaly in Large Language Models
CV and Pattern Recognition
Finds strange things happening in videos.
Efficient Video Sampling: Pruning Temporally Redundant Tokens for Faster VLM Inference
CV and Pattern Recognition
Makes videos faster for computers to understand.