Memory-efficient Streaming VideoLLMs for Real-time Procedural Video Understanding
By: Dibyadip Chatterjee, Edoardo Remelli, Yale Song, and others
Potential Business Impact:
Lets computers understand long videos in real time with a small memory footprint.
We introduce ProVideLLM, an end-to-end framework for real-time procedural video understanding. ProVideLLM integrates a multimodal cache that stores two types of tokens: verbalized text tokens, which provide compressed textual summaries of long-term observations, and visual tokens, encoded with DETR-QFormer to capture fine-grained details from short-term observations. This design reduces the token count for representing one hour of long-term observations by 22x compared to existing methods, while effectively encoding the fine-grained details of the present. By interleaving these tokens in our multimodal cache, ProVideLLM ensures sub-linear scaling of memory and compute with video length, enabling per-frame streaming inference at 10 FPS and streaming dialogue at 25 FPS, with a minimal 2 GB GPU memory footprint. ProVideLLM also sets new state-of-the-art results on six procedural tasks across four datasets.
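To make the cache design concrete, here is a minimal Python sketch, not the authors' implementation, of an interleaved multimodal cache: the most recent frames keep fine-grained visual tokens, older frames are compressed into short verbalized text summaries, and those summaries are themselves periodically condensed so the cache does not grow linearly with raw frame count. All names here (`MultimodalCache`, `verbalize`, `condense`, and the fake token lists standing in for DETR-QFormer output) are hypothetical.

```python
# A minimal sketch (not the authors' code) of an interleaved multimodal cache.
from collections import deque


class MultimodalCache:
    def __init__(self, max_visual=8, max_text=16):
        # Short-term memory: fine-grained visual tokens for recent frames.
        self.visual = deque(maxlen=max_visual)
        # Long-term memory: compact verbalized summaries of older frames.
        self.text = []
        self.max_text = max_text

    def add_frame(self, visual_tokens, verbalize, condense):
        # Before the oldest visual entry is evicted, verbalize it into text.
        if len(self.visual) == self.visual.maxlen:
            self.text.append(verbalize(self.visual[0]))
        self.visual.append(visual_tokens)
        # Periodically condense old summaries so long-term memory stays bounded.
        if len(self.text) > self.max_text:
            self.text = [condense(self.text)]

    def as_prompt(self):
        # Interleaved prompt: long-term text context first, then the
        # fine-grained visual tokens describing the present.
        return list(self.text) + list(self.visual)


if __name__ == "__main__":
    # Plain-string stand-ins for a learned captioner and summary condenser.
    verbalize = lambda toks: f"<caption of {len(toks)} visual tokens>"
    condense = lambda texts: f"<summary of {len(texts)} captions>"
    cache = MultimodalCache(max_visual=2, max_text=3)
    for t in range(10):
        frame_tokens = [f"v{t}_{i}" for i in range(4)]  # fake per-frame encoder output
        cache.add_frame(frame_tokens, verbalize, condense)
    print(cache.as_prompt())
```

In ProVideLLM the verbalizer and the visual encoder are learned components (the paper names DETR-QFormer for the visual side); the plain strings above merely stand in for both to illustrate how interleaving text summaries with recent visual tokens keeps memory and compute from scaling linearly with video length.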
Similar Papers
StreamingVLM: Real-Time Understanding for Infinite Video Streams
CV and Pattern Recognition
Lets computers watch long videos in real time.
video-SALMONN S: Streaming Audio-Visual LLMs Beyond Length Limits via Memory
CV and Pattern Recognition
Lets AI watch hours of video at once.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.