Event-VStream: Event-Driven Real-Time Understanding for Long Video Streams

Published: January 22, 2026 | arXiv ID: 2601.15655v1

By: Zhenghui Guo, Yuanbin Man, Junyuan Sheng, and more

Potential Business Impact:

Lets computers understand long video streams in real time without forgetting earlier context.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Real-time understanding of long video streams remains challenging for video large language models (VLMs) due to redundant frame processing and rapid forgetting of past context. Existing streaming systems rely on fixed-interval decoding or cache pruning, which either produce repetitive outputs or discard crucial temporal information. We introduce Event-VStream, an event-aware framework that represents continuous video as a sequence of discrete, semantically coherent events. Our system detects meaningful state transitions by integrating motion, semantic, and predictive cues, and triggers language generation only at those boundaries. Each event embedding is consolidated into a persistent memory bank, enabling long-horizon reasoning while maintaining low latency. Across OVOBench-Realtime and long-form Ego4D evaluations, Event-VStream achieves competitive performance. It improves over a VideoLLM-Online-8B baseline by +10.4 points on OVOBench-Realtime, achieves performance close to Flash-VStream-7B despite using only a general-purpose LLaMA-3-8B text backbone, and maintains a GPT-5 win rate of around 70% on 2-hour Ego4D streams.
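The abstract describes two mechanisms: a boundary detector that fuses motion, semantic, and predictive cues to decide when a new event starts, and a memory bank that stores one consolidated embedding per event. The sketch below illustrates how such a pipeline could be wired up; it is not the paper's implementation. All names (`EventBoundaryDetector`, `EventMemoryBank`, `step`, `consolidate`), the cue weights, the linear-extrapolation predictor, and the threshold are illustrative assumptions.

```python
"""Minimal sketch of the event-boundary idea from the Event-VStream abstract:
fuse motion, semantic, and predictive cues into a boundary score, trigger
generation only when it crosses a threshold, and consolidate each finished
event into a persistent memory bank. Weights, threshold, and the toy
predictor are placeholder assumptions, not values from the paper."""

import numpy as np


def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 - cosine similarity between two feature vectors."""
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)


class EventBoundaryDetector:
    def __init__(self, w_motion=0.3, w_semantic=0.4, w_predictive=0.3,
                 threshold=0.5):
        self.w = (w_motion, w_semantic, w_predictive)
        self.threshold = threshold
        self.prev_frame = None   # raw pixels of the previous frame
        self.prev_embed = None   # semantic embedding of the previous frame
        self.history = []        # recent embeddings for the predictive cue

    def _predict_next(self) -> np.ndarray:
        """Toy predictor: linear extrapolation from the last two embeddings."""
        if len(self.history) < 2:
            return self.history[-1]
        return 2 * self.history[-1] - self.history[-2]

    def step(self, frame: np.ndarray, embed: np.ndarray) -> bool:
        """Return True if this frame should open a new event."""
        if self.prev_frame is None:
            self.prev_frame, self.prev_embed = frame, embed
            self.history.append(embed)
            return True  # the first frame opens the first event

        motion = float(np.abs(frame - self.prev_frame).mean())      # motion cue
        semantic = cosine_distance(embed, self.prev_embed)          # semantic cue
        predictive = cosine_distance(embed, self._predict_next())   # predictive cue

        score = (self.w[0] * motion + self.w[1] * semantic
                 + self.w[2] * predictive)
        self.prev_frame, self.prev_embed = frame, embed
        self.history = (self.history + [embed])[-8:]  # bounded window
        return score > self.threshold


class EventMemoryBank:
    """Persistent store of per-event embeddings for long-horizon queries."""

    def __init__(self):
        self.events: list[np.ndarray] = []

    def consolidate(self, frame_embeds: list[np.ndarray]) -> None:
        # Mean-pool the frames of a finished event into one vector.
        self.events.append(np.mean(frame_embeds, axis=0))

    def retrieve(self, query: np.ndarray, k: int = 3) -> list[int]:
        # Return indices of the k events most similar to the query.
        sims = [1.0 - cosine_distance(query, e) for e in self.events]
        return sorted(range(len(sims)), key=lambda i: -sims[i])[:k]
```

In a real system the frame embedding would come from the VLM's vision encoder, and the expensive language-generation call would be issued only when `step()` returns True, which is what keeps latency low between event boundaries.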

Page Count
11 pages

Category
Computer Science:
Computer Vision and Pattern Recognition