Open-ended Hierarchical Streaming Video Understanding with Vision Language Models
By: Hyolim Kang, Yunsu Park, Youngbeom Yoo, and more
Potential Business Impact:
Lets computers understand and describe videos as they happen.
We introduce Hierarchical Streaming Video Understanding, a task that combines online temporal action localization with free-form description generation. Given the scarcity of datasets with hierarchical and fine-grained temporal annotations, we demonstrate that LLMs can effectively group atomic actions into higher-level events, enriching existing datasets. We then propose OpenHOUSE (Open-ended Hierarchical Online Understanding System for Events), which extends streaming action perception beyond action classification. OpenHOUSE features a specialized streaming module that accurately detects boundaries between closely adjacent actions, nearly doubling the performance of direct extensions of existing methods. We envision the future of streaming action perception in the integration of powerful generative models, with OpenHOUSE representing a key step in that direction.
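The grouping idea can be illustrated with a minimal sketch. The paper uses LLMs to cluster atomic actions into higher-level events; the toy version below (not the authors' method) stands in with a simple temporal-gap heuristic, merging adjacent atomic action segments into one coarse event whenever the gap between them is small. The `Segment` type and `max_gap` threshold are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    label: str
    start: float  # seconds
    end: float    # seconds

def group_into_events(atomic: list[Segment], max_gap: float = 2.0) -> list[Segment]:
    """Greedily merge temporally adjacent atomic actions into coarse events.

    A deterministic stand-in for the paper's LLM-based grouping: a new
    event starts whenever the gap between consecutive atomic actions
    exceeds `max_gap`; otherwise labels are concatenated into one event.
    """
    events: list[Segment] = []
    for seg in sorted(atomic, key=lambda s: s.start):
        if events and seg.start - events[-1].end <= max_gap:
            # Close enough in time: extend the current event.
            ev = events[-1]
            ev.label = f"{ev.label}+{seg.label}"
            ev.end = max(ev.end, seg.end)
        else:
            # Large gap: start a new higher-level event.
            events.append(Segment(seg.label, seg.start, seg.end))
    return events

timeline = [
    Segment("crack egg", 0.0, 2.5),
    Segment("whisk", 3.0, 6.0),
    Segment("pour batter", 6.5, 9.0),
    Segment("answer phone", 20.0, 25.0),
]
print([(e.label, e.start, e.end) for e in group_into_events(timeline)])
# → [('crack egg+whisk+pour batter', 0.0, 9.0), ('answer phone', 20.0, 25.0)]
```

In the paper, an LLM replaces the gap heuristic, so semantically related actions can be grouped even across pauses; the hierarchy (atomic actions nested under events) is the same.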
Similar Papers
StreamingVLM: Real-Time Understanding for Infinite Video Streams
CV and Pattern Recognition
Lets computers watch long videos in real-time.
Xiaoice: Training-Free Video Understanding via Self-Supervised Spatio-Temporal Clustering of Semantic Features
CV and Pattern Recognition
Makes computers understand videos without extra training.
StreamAgent: Towards Anticipatory Agents for Streaming Video Understanding
CV and Pattern Recognition
Helps self-driving cars see future events.