Memory-efficient Streaming VideoLLMs for Real-time Procedural Video Understanding

Published: April 10, 2025 | arXiv ID: 2504.13915v1

By: Dibyadip Chatterjee, Edoardo Remelli, Yale Song and more

Potential Business Impact:

Lets computers understand hour-long videos in real time with a small memory footprint.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

We introduce ProVideLLM, an end-to-end framework for real-time procedural video understanding. ProVideLLM integrates a multimodal cache configured to store two types of tokens: verbalized text tokens, which provide compressed textual summaries of long-term observations, and visual tokens, encoded with DETR-QFormer to capture fine-grained details from short-term observations. This design reduces token count by 22x over existing methods when representing one hour of long-term observations, while effectively encoding the fine-grained details of the present. By interleaving these tokens in our multimodal cache, ProVideLLM ensures sub-linear scaling of memory and compute with video length, enabling per-frame streaming inference at 10 FPS and streaming dialogue at 25 FPS, with a minimal 2GB GPU memory footprint. ProVideLLM also sets new state-of-the-art results on six procedural tasks across four datasets.
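The sub-linear scaling claim can be illustrated with a toy sketch of such an interleaved cache: short-term observations keep full visual tokens in a fixed-size window, while observations that age out of the window are replaced by a few-token text summary. This is a conceptual illustration only, not the paper's implementation; all class names, window sizes, and token budgets below (`MultimodalCache`, `visual_window`, `tokens_per_summary`, `tokens_per_frame`) are hypothetical choices for the sketch.

```python
from collections import deque

class MultimodalCache:
    """Toy sketch of an interleaved multimodal token cache (not the paper's code).

    Long-term history is kept as compressed text summaries (a few tokens each);
    only the most recent frames keep full visual tokens, so total token count
    grows with the number of summaries rather than the number of frames.
    """

    def __init__(self, visual_window=8, tokens_per_summary=4, tokens_per_frame=96):
        self.tokens_per_summary = tokens_per_summary   # assumed summary cost
        self.tokens_per_frame = tokens_per_frame       # assumed per-frame visual cost
        self.text_summaries = []                       # long-term: verbalized summaries
        self.visual_frames = deque(maxlen=visual_window)  # short-term: recent frames only

    def observe_frame(self, frame_tokens):
        """Append a new frame's visual tokens; the oldest frame leaves the window."""
        self.visual_frames.append(frame_tokens)

    def add_summary(self, summary):
        """Record a text summary covering detail that left the short-term window."""
        self.text_summaries.append(summary)

    def token_count(self):
        """Current size of the interleaved cache, in tokens."""
        return (len(self.text_summaries) * self.tokens_per_summary
                + len(self.visual_frames) * self.tokens_per_frame)

# Simulate 1000 frames, summarizing every 100 frames into one text entry.
cache = MultimodalCache()
for t in range(1000):
    cache.observe_frame([0] * cache.tokens_per_frame)  # placeholder visual tokens
    if t % 100 == 99:
        cache.add_summary(f"summary of frames {t - 99}-{t}")

compressed = cache.token_count()           # 10 summaries + 8 windowed frames
uncompressed = 1000 * cache.tokens_per_frame  # keeping every frame's visual tokens
```

Under these assumed budgets, the cache holds a few hundred tokens after 1000 frames instead of the ~96,000 a full visual history would need, which is the kind of bounded growth that makes per-frame streaming inference feasible.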

Page Count
13 pages

Category
Computer Science:
CV and Pattern Recognition