Learning Streaming Video Representation via Multitask Training

Published: April 28, 2025 | arXiv ID: 2504.20041v2

By: Yibin Yan, Jilan Xu, Shangzhe Di, and more

Potential Business Impact:

Helps robots and autonomous vehicles understand live video in real time.

Business Areas:
Video Streaming Content and Publishing, Media and Entertainment, Video

Understanding continuous video streams plays a fundamental role in real-time applications including embodied AI and autonomous driving. Unlike offline video understanding, streaming video understanding requires the ability to process video streams frame by frame, preserve historical information, and make low-latency decisions. To address these challenges, our main contributions are three-fold. (i) We develop a novel streaming video backbone, termed StreamFormer, by incorporating causal temporal attention into a pre-trained vision transformer. This enables efficient streaming video processing while maintaining image representation capability. (ii) To train StreamFormer, we propose to unify diverse spatiotemporal video understanding tasks within a multitask visual-language alignment framework. Hence, StreamFormer learns global semantics, temporal dynamics, and fine-grained spatial relationships simultaneously. (iii) We conduct extensive experiments on online action detection, online video instance segmentation, and video question answering. StreamFormer achieves competitive results while maintaining efficiency, demonstrating its potential for real-time applications.
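The key architectural idea in contribution (i), causal temporal attention, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it only shows the core property: each frame attends to itself and to past frames, never to future ones, so per-frame features (e.g. ViT [CLS] tokens, assumed here) can be processed as they stream in without revisiting earlier outputs.

```python
import numpy as np

def causal_temporal_attention(frame_feats):
    """Single-head causal self-attention over the time axis (illustrative sketch).

    frame_feats: (T, D) array of per-frame features, e.g. one token per frame.
    Returns a (T, D) array where row t is a weighted sum of frames 0..t only.
    """
    T, D = frame_feats.shape
    # Scaled dot-product scores between all pairs of frames: (T, T).
    scores = frame_feats @ frame_feats.T / np.sqrt(D)
    # Mask out future positions (strictly upper triangle) with -inf
    # so softmax assigns them zero weight.
    future = np.triu(np.ones((T, T), dtype=bool), k=1)
    scores[future] = -np.inf
    # Row-wise softmax over the visible (past and current) frames.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ frame_feats
```

Because of the causal mask, appending a new frame never changes the outputs already computed for earlier frames, which is what makes frame-by-frame streaming inference with cached history possible.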

Page Count
16 pages

Category
Computer Science:
CV and Pattern Recognition