Event-VStream: Event-Driven Real-Time Understanding for Long Video Streams
By: Zhenghui Guo, Yuanbin Man, Junyuan Sheng, and more
Potential Business Impact:
Lets computers understand long videos without forgetting.
Real-time understanding of long video streams remains challenging for multimodal large language models (MLLMs) due to redundant frame processing and rapid forgetting of past context. Existing streaming systems rely on fixed-interval decoding or cache pruning, which either produce repetitive outputs or discard crucial temporal information. We introduce Event-VStream, an event-aware framework that represents continuous video as a sequence of discrete, semantically coherent events. Our system detects meaningful state transitions by integrating motion, semantic, and predictive cues, and triggers language generation only at those boundaries. Each event embedding is consolidated into a persistent memory bank, enabling long-horizon reasoning while maintaining low latency. Across OVOBench-Realtime and long-form Ego4D evaluations, Event-VStream achieves competitive performance. It improves over a VideoLLM-Online-8B baseline by +10.4 points on OVOBench-Realtime, achieves performance close to Flash-VStream-7B despite using only a general-purpose LLaMA-3-8B text backbone, and maintains around a 70% GPT-5 win rate on 2-hour Ego4D streams.
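The abstract does not spell out how boundary detection or memory consolidation are implemented, so the following is only a minimal sketch of the described pipeline under stated assumptions: the class names (EventBoundaryDetector, EventMemoryBank), the cue weights, the detection threshold, and mean-pooling as the consolidation rule are all hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity with a small epsilon to avoid division by zero."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

class EventBoundaryDetector:
    """Combines motion, semantic, and predictive cues into one change score
    and declares an event boundary when it crosses a threshold.
    Weights and threshold here are illustrative, not from the paper."""

    def __init__(self, w_motion=0.3, w_semantic=0.4, w_predictive=0.3, threshold=0.15):
        self.w_motion, self.w_semantic, self.w_predictive = w_motion, w_semantic, w_predictive
        self.threshold = threshold

    def is_boundary(self, prev_frame, frame, prev_emb, emb, pred_emb) -> bool:
        motion = float(np.mean(np.abs(frame - prev_frame)))   # raw pixel-level change
        semantic = 1.0 - cosine(prev_emb, emb)                # frame-embedding drift
        predictive = 1.0 - cosine(pred_emb, emb)              # error of a next-frame prediction
        score = (self.w_motion * motion
                 + self.w_semantic * semantic
                 + self.w_predictive * predictive)
        return score > self.threshold

class EventMemoryBank:
    """Persistent bank of consolidated (here: mean-pooled) event embeddings,
    queried by cosine similarity for long-horizon reasoning."""

    def __init__(self, capacity: int = 512):
        self.capacity = capacity
        self.events: list[np.ndarray] = []

    def consolidate(self, frame_embs: list[np.ndarray]) -> None:
        self.events.append(np.mean(frame_embs, axis=0))  # one vector per event
        if len(self.events) > self.capacity:             # evict the oldest event
            self.events.pop(0)

    def retrieve(self, query: np.ndarray, k: int = 4) -> list[np.ndarray]:
        order = sorted(range(len(self.events)),
                       key=lambda i: cosine(query, self.events[i]), reverse=True)
        return [self.events[i] for i in order[:k]]

# Toy streaming loop with synthetic frames/embeddings: language generation
# would be triggered only where a boundary fires, not at fixed intervals.
rng = np.random.default_rng(0)
detector, memory = EventBoundaryDetector(), EventMemoryBank()
prev_frame, prev_emb = rng.random((8, 8)), rng.random(16)
pending = [prev_emb]                                     # frames of the current event

for t in range(1, 100):
    shift = t % 25 == 0                                  # synthetic scene change
    frame = rng.random((8, 8)) if shift else prev_frame + 0.01 * rng.random((8, 8))
    emb = rng.random(16) if shift else prev_emb + 0.01 * rng.random(16)
    pred_emb = prev_emb                                  # naive "predict no change" model
    if detector.is_boundary(prev_frame, frame, prev_emb, emb, pred_emb):
        memory.consolidate(pending)                      # close the event and bank it
        pending = []                                     # ...and trigger generation here
        print(f"t={t}: event boundary -> generate caption, bank size={len(memory.events)}")
    pending.append(emb)
    prev_frame, prev_emb = frame, emb
```

In this sketch, decoding cost scales with the number of detected events rather than the number of frames, which is the property the abstract credits for combining low latency with long-horizon memory.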
Similar Papers
StreamingVLM: Real-Time Understanding for Infinite Video Streams
CV and Pattern Recognition
Lets computers watch long videos in real time.
EventVL: Understand Event Streams via Multimodal Large Language Model
CV and Pattern Recognition
Helps computers understand fast-moving events.