StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding

Published: August 21, 2025 | arXiv ID: 2508.15717v1

By: Yanlai Yang, Zhuokai Zhao, Satya Narayan Shukla, and others

BigTech Affiliations: Meta

Potential Business Impact:

Enables systems to answer questions about long or streaming videos within a fixed memory budget, without re-encoding the full video for each question.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Multimodal large language models (MLLMs) have made significant progress in visual-language reasoning, but their ability to efficiently handle long videos remains limited. Despite recent advances in long-context MLLMs, storing and attending to the key-value (KV) cache for long visual contexts incurs substantial memory and computational overhead. Existing visual compression methods require either encoding the entire visual context before compression or having access to the questions in advance, which is impractical for long video understanding and multi-turn conversational settings. In this work, we propose StreamMem, a query-agnostic KV cache memory mechanism for streaming video understanding. Specifically, StreamMem encodes new video frames in a streaming manner, compressing the KV cache using attention scores between visual tokens and generic query tokens, while maintaining a fixed-size KV memory to enable efficient question answering (QA) in memory-constrained, long-video scenarios. Evaluation on three long video understanding and two streaming video question answering benchmarks shows that StreamMem achieves state-of-the-art performance in query-agnostic KV cache compression and is competitive with query-aware compression approaches.
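The core compression step described in the abstract, scoring cached visual key-value pairs by their attention from generic query tokens and keeping only a fixed-size subset, can be sketched roughly as follows. This is a minimal NumPy illustration based on the abstract alone, not the authors' implementation; the use of mean-pooled softmax attention and top-k selection here are assumptions.

```python
import numpy as np

def compress_kv_cache(keys, values, generic_queries, budget):
    """Keep the `budget` visual KV pairs most attended to by generic
    query tokens (hypothetical sketch of query-agnostic compression).

    keys, values:    (n_tokens, d) cached visual KV entries
    generic_queries: (n_query, d)  generic (question-independent) queries
    budget:          fixed memory size to maintain
    """
    d = keys.shape[-1]
    # Scaled dot-product attention scores of generic queries over cache.
    scores = generic_queries @ keys.T / np.sqrt(d)      # (n_query, n_tokens)
    scores = np.exp(scores - scores.max(axis=-1, keepdims=True))
    scores /= scores.sum(axis=-1, keepdims=True)
    # Importance of each cached token, averaged over the generic queries.
    importance = scores.mean(axis=0)                    # (n_tokens,)
    # Keep the top-`budget` tokens, preserving their temporal order.
    keep = np.sort(np.argsort(importance)[-budget:])
    return keys[keep], values[keep]

def stream_frames(frames_kv, generic_queries, budget):
    """Encode frames one at a time, re-compressing after each frame so
    the KV memory never exceeds `budget` entries."""
    mem_k = np.empty((0, generic_queries.shape[-1]))
    mem_v = np.empty((0, generic_queries.shape[-1]))
    for frame_k, frame_v in frames_kv:
        mem_k = np.concatenate([mem_k, frame_k])
        mem_v = np.concatenate([mem_v, frame_v])
        if len(mem_k) > budget:
            mem_k, mem_v = compress_kv_cache(mem_k, mem_v,
                                             generic_queries, budget)
    return mem_k, mem_v
```

Because compression happens after each frame rather than once at the end, peak memory stays bounded by the budget plus one frame's tokens, which is what makes the streaming, question-free setting tractable.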

Country of Origin
🇺🇸 United States

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition