StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding
By: Yanlai Yang, Zhuokai Zhao, Satya Narayan Shukla, and more
Potential Business Impact:
Lets computers watch long videos and answer questions about them without running out of memory.
Multimodal large language models (MLLMs) have made significant progress in visual-language reasoning, but their ability to efficiently handle long videos remains limited. Despite recent advances in long-context MLLMs, storing and attending to the key-value (KV) cache for long visual contexts incurs substantial memory and computational overhead. Existing visual compression methods require either encoding the entire visual context before compression or having access to the questions in advance, which is impractical for long video understanding and multi-turn conversational settings. In this work, we propose StreamMem, a query-agnostic KV cache memory mechanism for streaming video understanding. Specifically, StreamMem encodes new video frames in a streaming manner, compressing the KV cache using attention scores between visual tokens and generic query tokens, while maintaining a fixed-size KV memory to enable efficient question answering (QA) in memory-constrained, long-video scenarios. Evaluation on three long video understanding and two streaming video question answering benchmarks shows that StreamMem achieves state-of-the-art performance in query-agnostic KV cache compression and is competitive with query-aware compression approaches.
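To make the mechanism concrete, here is a minimal sketch of query-agnostic KV cache compression in the streaming setting the abstract describes: each new frame's KV entries are appended, keys are scored by their attention weights against a set of generic (question-independent) query tokens, and only a fixed budget of entries is retained. The function names (compress_kv_cache, stream_video, encode_frame) and details such as mean-aggregation of scores and top-k selection are illustrative assumptions, not the paper's exact algorithm.

```python
import torch

def compress_kv_cache(keys, values, generic_query, memory_size):
    """Query-agnostic KV compression (illustrative sketch).

    Scores each cached key by its attention weight against generic,
    question-independent query tokens, then keeps the top entries.

    keys, values:  (num_tokens, head_dim) cached visual KV entries
    generic_query: (num_queries, head_dim) generic query tokens
    memory_size:   fixed KV memory budget
    """
    d = keys.shape[-1]
    # Attention scores between generic queries and all cached keys.
    scores = torch.softmax(generic_query @ keys.T / d ** 0.5, dim=-1)
    # Aggregate each key's importance across the generic queries
    # (mean-pooling here is an assumption; other reductions work too).
    importance = scores.mean(dim=0)                 # (num_tokens,)
    k = min(memory_size, importance.numel())
    # Keep the highest-scoring entries, preserving temporal order.
    kept = importance.topk(k).indices.sort().values
    return keys[kept], values[kept]

def stream_video(frames, encode_frame, generic_query, memory_size=1024):
    """Encode frames one at a time, compressing after each step so the
    KV memory never exceeds `memory_size` entries."""
    head_dim = generic_query.shape[-1]
    mem_k = torch.empty(0, head_dim)
    mem_v = torch.empty(0, head_dim)
    for frame in frames:
        # `encode_frame` stands in for the MLLM's per-frame KV encoder.
        k_new, v_new = encode_frame(frame)
        mem_k = torch.cat([mem_k, k_new])
        mem_v = torch.cat([mem_v, v_new])
        mem_k, mem_v = compress_kv_cache(mem_k, mem_v,
                                         generic_query, memory_size)
    return mem_k, mem_v
```

Because the scoring uses generic query tokens rather than the user's question, compression can run during streaming, before any question arrives, which is what distinguishes this setup from query-aware compression methods.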
Similar Papers
StreamKV: Streaming Video Question-Answering with Segment-based KV Cache Retrieval and Compression
CV and Pattern Recognition
Lets computers understand long videos better.
Streaming Video Question-Answering with In-context Video KV-Cache Retrieval
CV and Pattern Recognition
Answers questions about long videos quickly.
CacheFlow: Compressive Streaming Memory for Efficient Long-Form Video Understanding
CV and Pattern Recognition
Lets computers watch long videos and answer questions.