CacheFlow: Compressive Streaming Memory for Efficient Long-Form Video Understanding
By: Shrenik Patel, Daivik Patel
Potential Business Impact:
Lets computers watch long videos and answer questions.
Long-form video question answering (VQA) overwhelms current vision-language models (VLMs) because attention and key-value (KV) caches grow with runtime, forcing either expensive inference or near-sighted sliding windows. We introduce CacheFlow, a training-free pipeline that pairs Dynamic Token Dropping (DTD) with a compressive long-term memory. DTD prunes per-patch tokens online via cosine similarity to the previous frame, and surviving tokens are packed into fixed-size blocks. This online, per-frame processing makes our approach fundamentally suited for live streaming VQA. As blocks are processed, each block's keys are summarized by a tiny recurrent encoder to form a retrieval index, while the block's full KV pairs are offloaded and later rehydrated for generation, preserving answer fidelity. At inference, a consensus-based mechanism retrieves only the Top-K most relevant blocks and attends over both the retrieved and local context for precise, long-range reasoning. CacheFlow is drop-in, architecture-agnostic, and requires no fine-tuning. Experiments on both offline and streaming VQA benchmarks demonstrate that CacheFlow outperforms strong current baselines while processing up to 87% fewer tokens. Our dual approach enables VLMs to be both efficient and context-aware, paving the way for practical long-form video understanding.
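The abstract's core online step, pruning per-patch tokens by cosine similarity to the previous frame and packing survivors into fixed-size blocks, can be illustrated with a minimal sketch. The function names, the similarity threshold, and the assumption that patches are aligned one-to-one across frames are illustrative choices, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def dynamic_token_drop(prev_patches: torch.Tensor,
                       cur_patches: torch.Tensor,
                       threshold: float = 0.9):
    """Sketch of DTD-style pruning (hypothetical helper, not the paper's code).

    prev_patches, cur_patches: (num_patches, dim) token embeddings for two
    consecutive frames, assumed to be spatially aligned patch-for-patch.
    Patches that barely changed (high cosine similarity) are dropped;
    only changed patches survive.
    """
    sim = F.cosine_similarity(cur_patches, prev_patches, dim=-1)  # (num_patches,)
    keep_mask = sim < threshold
    return cur_patches[keep_mask], keep_mask

def pack_into_blocks(surviving_tokens, block_size: int = 256):
    """Pack a stream of surviving tokens into fixed-size blocks, which the
    pipeline would then summarize for retrieval and offload as full KV pairs."""
    blocks, buffer = [], []
    for tok in surviving_tokens:
        buffer.append(tok)
        if len(buffer) == block_size:
            blocks.append(torch.stack(buffer))  # (block_size, dim)
            buffer = []
    return blocks
```

In this sketch, blocks are emitted only when full; a real streaming system would also flush partial blocks and attach each block's summary key to a retrieval index for later Top-K lookup.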
Similar Papers
StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding
CV and Pattern Recognition
Lets computers watch long videos and answer questions.
Streaming Video Question-Answering with In-context Video KV-Cache Retrieval
CV and Pattern Recognition
Answers questions about long videos instantly.
Deep Forcing: Training-Free Long Video Generation with Deep Sink and Participative Compression
CV and Pattern Recognition
Makes videos play longer without looking weird.