video-SALMONN S: Streaming Audio-Visual LLMs Beyond Length Limits via Memory
By: Guangzhi Sun, Yixuan Li, Xiaodong Wu, and more
Potential Business Impact:
Lets AI watch hours of video at once.
Continuous, high-frame-rate, high-resolution processing of long video streams is critical for future AI agents, yet current video-understanding LLMs struggle to scale. Offline methods with a fixed frame budget must lower the frame rate as the stream grows longer; streaming methods bound memory by merging or discarding tokens, losing information. We propose video-SALMONN S, a streaming audio-visual LLM that, to our knowledge, is the first to process 3-hour videos at 1 FPS and 360p resolution under a fixed memory budget. Our model introduces (i) a test-time-training (TTT) memory module that replaces token merging, continually updating token representations to capture long-range dependencies, and (ii) a prompt-dependent memory reader that selectively retrieves context-relevant content from the fixed-size memory. The TTT module is optimised with a Hessian-free conjugate-gradient procedure (TTT_HF) for efficient adaptation. On long-video benchmarks (Video-MME, LVBench, VideoEvalPro), video-SALMONN S sustains high-quality understanding on multi-hour videos with 10k frames and 1M tokens. Our 8B-parameter model achieves 74.2% overall accuracy and 67.8% on the Video-MME long split, outperforming both offline and streaming baselines.
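The abstract does not include implementation details, but the two components can be illustrated with a minimal sketch. The code below assumes a linear fast-weight memory `W` adapted at test time by ridge regression on each incoming chunk of (key, value) token pairs, with the normal equations solved by a few conjugate-gradient iterations (Hessian-free in the sense that only products with the system matrix are formed, never an explicit inverse or Hessian), plus a single-head cross-attention reader driven by the prompt. All names (`cg_solve`, `ttt_update`, `read_memory`), the linear memory form, and the hyperparameters are illustrative assumptions, not the authors' actual TTT_HF implementation.

```python
# Minimal PyTorch sketch (not the paper's code) of a TTT fast-weight memory
# with a Hessian-free CG update and a prompt-dependent memory reader.
import torch

def cg_solve(matvec, B, X0=None, iters=8, eps=1e-8):
    """Conjugate gradient for X @ A = B with SPD A, given only the operator
    matvec(X) = X @ A (CG applies under the Frobenius inner product)."""
    X = torch.zeros_like(B) if X0 is None else X0.clone()
    R = B - matvec(X)          # residual
    P = R.clone()              # search direction
    rs = (R * R).sum()
    for _ in range(iters):
        AP = matvec(P)
        alpha = rs / ((P * AP).sum() + eps)
        X = X + alpha * P
        R = R - alpha * AP
        rs_new = (R * R).sum()
        if rs_new.sqrt() < eps:
            break
        P = R + (rs_new / rs) * P
        rs = rs_new
    return X

def ttt_update(W, keys, vals, lam=1e-2, iters=8):
    """Adapt fast weights W (d_out x d_in) so keys @ W.T ~ vals on the newest
    chunk, by solving the ridge normal equations
        W (K^T K + lam I) = V^T K
    with CG instead of an explicit inverse (the Hessian-free step). Warm-
    starting from the current W makes the update continual across chunks."""
    d_in = keys.shape[-1]
    A = keys.T @ keys + lam * torch.eye(d_in)   # SPD system matrix
    B = vals.T @ keys
    return cg_solve(lambda X: X @ A, B, X0=W, iters=iters)

def read_memory(prompt_q, slots, W):
    """Prompt-dependent reader: cross-attend the prompt query over a fixed
    set of memory slots, returning slot values projected through the
    adapted fast weights."""
    scores = prompt_q @ slots.T / slots.shape[-1] ** 0.5   # (q, M)
    attn = torch.softmax(scores, dim=-1)
    return attn @ (slots @ W.T)                            # (q, d_out)

# Toy usage: stream chunks of audio-visual tokens into the memory, then
# answer a prompt from the fixed-size memory alone.
torch.manual_seed(0)
d_in, d_out, M = 64, 64, 32
W = torch.zeros(d_out, d_in)
slots = torch.randn(M, d_in)                  # fixed memory budget
for _ in range(3):                            # three incoming stream chunks
    keys, vals = torch.randn(128, d_in), torch.randn(128, d_out)
    W = ttt_update(W, keys, vals)
prompt_q = torch.randn(1, d_in)
print(read_memory(prompt_q, slots, W).shape)  # torch.Size([1, 64])
```

Note the design choice in this sketch: warm-starting CG from the current fast weights is what makes the memory update continual, whereas a fresh solve on each chunk would discard everything learned from earlier parts of the stream.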
Similar Papers
StreamingVLM: Real-Time Understanding for Infinite Video Streams
CV and Pattern Recognition
Lets computers watch long videos in real-time.
VideoMem: Enhancing Ultra-Long Video Understanding via Adaptive Memory Management
CV and Pattern Recognition
Lets computers watch and remember long videos.
video-SALMONN 2: Captioning-Enhanced Audio-Visual Large Language Models
CV and Pattern Recognition
Makes videos explain themselves with words.