Think-Clip-Sample: Slow-Fast Frame Selection for Video Understanding
By: Wenhui Tan, Ruihua Song, Jiaze Li, and more
Potential Business Impact:
Enables AI to understand long videos faster and more accurately.
Recent progress in multi-modal large language models (MLLMs) has significantly advanced video understanding. However, their performance on long-form videos remains limited by computational constraints and suboptimal frame selection. We present Think-Clip-Sample (TCS), a training-free framework that enhances long video understanding through two key components: (i) Multi-Query Reasoning, which generates multiple queries to capture complementary aspects of the question and video; and (ii) Clip-level Slow-Fast Sampling, which adaptively balances dense local details and sparse global context. Extensive experiments on MLVU, LongVideoBench, and VideoMME demonstrate that TCS consistently improves performance across different MLLMs, boosting accuracy by up to 6.9%, and achieves comparable accuracy with 50% less inference time, highlighting both the efficiency and efficacy of TCS for long video understanding.
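To make the clip-level slow-fast idea concrete, below is a minimal sketch of how such a sampler could combine dense frames from query-relevant clips with sparse frames for global context. All names and parameters (`select_slow_fast_frames`, `clip_len`, the frame budgets, and the precomputed `relevance` scores) are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of clip-level slow-fast frame sampling (not the paper's code).
import numpy as np

def select_slow_fast_frames(num_frames, relevance, clip_len=16,
                            slow_clips=4, slow_per_clip=8, fast_budget=32):
    """Pick dense frames from the most query-relevant clips (slow pathway)
    and sparse frames uniformly across the whole video (fast pathway)."""
    # Split the video into fixed-length clips and score each clip by the mean
    # relevance of its frames (relevance is assumed precomputed, e.g. from
    # query-frame similarity against the generated queries).
    clip_starts = np.arange(0, num_frames, clip_len)
    clip_scores = [relevance[s:s + clip_len].mean() for s in clip_starts]

    # Slow pathway: dense, evenly spaced frames inside the top-scoring clips.
    top_clips = np.argsort(clip_scores)[::-1][:slow_clips]
    slow = []
    for c in top_clips:
        start = clip_starts[c]
        end = min(start + clip_len, num_frames)
        slow.extend(np.linspace(start, end - 1, slow_per_clip).astype(int))

    # Fast pathway: sparse, uniformly spaced frames for global context.
    fast = np.linspace(0, num_frames - 1, fast_budget).astype(int)

    # Merge both pathways, keeping temporal order and dropping duplicates.
    return sorted(set(int(i) for i in slow) | set(int(i) for i in fast))

# Example: a 1,000-frame video with placeholder per-frame relevance scores.
frames = select_slow_fast_frames(1000, np.random.rand(1000))
print(len(frames), frames[:10])
```

In this sketch the slow pathway spends its frame budget on a few clips the queries point to, while the fast pathway keeps a coarse view of the whole video; the actual TCS method adapts this balance rather than using fixed budgets.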
Similar Papers
Video-LLMs with Temporal Visual Screening
CV and Pattern Recognition
Helps computers understand videos better by focusing on the important parts.
From Frames to Clips: Efficient Key Clip Selection for Long-Form Video Understanding
CV and Pattern Recognition
Helps computers understand long videos better.
HFS: Holistic Query-Aware Frame Selection for Efficient Video Reasoning
CV and Pattern Recognition
Finds the most important moments in videos.