Think-Clip-Sample: Slow-Fast Frame Selection for Video Understanding

Published: January 16, 2026 | arXiv ID: 2601.11359v1

By: Wenhui Tan, Ruihua Song, Jiaze Li, and more

Potential Business Impact:

Enables AI to understand long videos faster and more accurately.

Business Areas:
Image Recognition, Data and Analytics, Software

Recent progress in multi-modal large language models (MLLMs) has significantly advanced video understanding. However, their performance on long-form videos remains limited by computational constraints and suboptimal frame selection. We present Think-Clip-Sample (TCS), a training-free framework that enhances long video understanding through two key components: (i) Multi-Query Reasoning, which generates multiple queries to capture complementary aspects of the question and video; and (ii) Clip-level Slow-Fast Sampling, which adaptively balances dense local detail with sparse global context. Extensive experiments on MLVU, LongVideoBench, and VideoMME demonstrate that TCS consistently improves performance across different MLLMs, boosting accuracy by up to 6.9%, and can reach comparable accuracy at 50% lower inference time cost, highlighting both the efficiency and efficacy of TCS for long video understanding.
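To make the sampling idea concrete, below is a minimal sketch of clip-level slow-fast sampling, assuming the query-relevant clips have already been identified (e.g., by the multi-query reasoning stage). The function name, `slow_ratio`, and the budget split are hypothetical illustrations of the idea, not the authors' implementation.

```python
import numpy as np

def slow_fast_sample(num_frames, relevant_clips, budget, slow_ratio=0.75):
    """Split a frame budget between dense ("slow") sampling inside
    query-relevant clips and sparse ("fast") sampling over the full video.

    relevant_clips: list of (start, end) frame ranges, assumed to come
    from an upstream query-to-clip matching stage.
    """
    if not relevant_clips:  # no clips flagged: fall back to uniform sampling
        return np.linspace(0, num_frames - 1, budget).astype(int).tolist()

    slow_budget = int(budget * slow_ratio)
    fast_budget = budget - slow_budget

    # Fast pass: sparse uniform indices for global context.
    fast_idx = np.linspace(0, num_frames - 1, fast_budget).astype(int).tolist()

    # Slow pass: dense indices inside each relevant clip, with the slow
    # budget split proportionally to clip length.
    total_len = sum(end - start for start, end in relevant_clips)
    slow_idx = []
    for start, end in relevant_clips:
        n = max(1, round(slow_budget * (end - start) / total_len))
        slow_idx.extend(np.linspace(start, end - 1, n).astype(int).tolist())

    return sorted(set(fast_idx) | set(slow_idx))

# Example: a 9,000-frame video with two clips flagged as relevant,
# sampled down to a 64-frame budget.
frames = slow_fast_sample(9000, [(1200, 1500), (6400, 6800)], budget=64)
```

The key design point this sketch captures is that the two passes trade off coverage against detail: the fast pass guarantees the model sees the whole timeline, while the slow pass concentrates the remaining budget where the queries point.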

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition