Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs
By: Vaggelis Dorovatas, Soroush Seifi, Gunshi Gupta, and more
Potential Business Impact:
Lets computers understand long videos faster.
Video Large Language Models (Video-LLMs) excel at understanding videos in context, provided they have full access to the video when answering queries. However, these models face challenges in streaming scenarios where hour-long videos must be processed online and questions need timely responses. In this work, we propose a training-free approach compatible with standard Video-LLMs, leveraging three key concepts: 1) LLM-informed selection of visual tokens to identify those that the LLM has attended to and that contributed to its understanding of each short clip. Our attention-based selection allows us to discard up to ~95% of unimportant visual tokens with minimal performance loss; 2) Recurrent processing of past selected tokens to generate a temporally coherent understanding of each processed clip; 3) Caption-based question answering for lightweight and accurate responses. Our method achieves state-of-the-art performance on streaming video benchmarks, striking a balance between efficiency and effectiveness.
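To make the first concept concrete, below is a minimal sketch of attention-based visual-token selection, assuming the LLM's attention weights from query positions onto the clip's visual tokens are available. The function name `select_tokens_by_attention`, the tensor shapes, and the aggregation by mean over query positions are illustrative assumptions, not the authors' actual implementation.

```python
import torch

def select_tokens_by_attention(visual_tokens: torch.Tensor,
                               attn_weights: torch.Tensor,
                               keep_ratio: float = 0.05) -> torch.Tensor:
    """Keep only the visual tokens that received the most LLM attention.

    visual_tokens: (num_visual_tokens, hidden_dim) embeddings for one clip.
    attn_weights:  (num_query_tokens, num_visual_tokens) attention from the
                   LLM's query/text positions onto the visual tokens
                   (e.g. averaged over heads and layers).
    keep_ratio:    fraction of visual tokens to retain (~5%, i.e. discarding
                   up to ~95% as reported in the abstract).
    """
    # Aggregate the attention each visual token received across query positions.
    scores = attn_weights.mean(dim=0)                    # (num_visual_tokens,)
    k = max(1, int(keep_ratio * visual_tokens.shape[0]))
    # Take the top-k tokens by attention, then restore temporal order.
    top_idx = scores.topk(k).indices.sort().values
    return visual_tokens[top_idx]

# Toy usage: 1,024 visual tokens for a clip, 32 query tokens attending to them.
if __name__ == "__main__":
    tokens = torch.randn(1024, 768)
    attn = torch.softmax(torch.randn(32, 1024), dim=-1)
    kept = select_tokens_by_attention(tokens, attn, keep_ratio=0.05)
    print(kept.shape)  # torch.Size([51, 768]) -> ~95% of tokens discarded
```

In a streaming setting, the retained tokens for each clip would then be carried forward and processed recurrently together with the next clip, so the memory footprint stays roughly constant as the video grows.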
Similar Papers
Language-Guided Temporal Token Pruning for Efficient VideoLLM Processing
CV and Pattern Recognition
Lets computers watch long videos faster.
An Empirical Study on How Video-LLMs Answer Video Questions
CV and Pattern Recognition
Explains how AI understands videos to make them faster.
FOCUS: Efficient Keyframe Selection for Long Video Understanding
CV and Pattern Recognition
Lets AI understand long videos using fewer frames.