CoT-Vid: Dynamic Chain-of-Thought Routing with Self Verification for Training-Free Video Reasoning
By: Hongbo Jin, Ruyang Liu, Wenhao Zhang, and more
Potential Business Impact:
Helps AI understand videos by thinking step-by-step.
System-2 reasoning is developing rapidly with the emergence of deep-thinking models and chain-of-thought techniques, and has become a central topic of discussion in the AI community. However, research on complex video reasoning remains comparatively sparse. In this work, we propose CoT-Vid, a novel training-free paradigm for the video domain with a multistage complex reasoning design. Unlike existing video LLMs, which rely heavily on perceptual abilities, CoT-Vid achieves a surprising performance gain through an explicit reasoning mechanism. The paradigm consists of three main components: dynamic inference path routing, a problem decoupling strategy, and video self-consistency verification. In addition, we propose a new standard for categorizing video questions. CoT-Vid shows outstanding results on a wide range of benchmarks, outperforming its base model by 9.3% on EgoSchema and 5.6% on VideoEspresso, and rivaling or even surpassing larger and proprietary models such as GPT-4V, GPT-4o, and Gemini-1.5-Flash. Our codebase will be publicly available soon.
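Since the paper's code is not yet released, the self-consistency verification component can only be illustrated generically. Below is a minimal, hypothetical sketch of the standard self-consistency idea: sample several independent reasoning paths for the same video question, then take a majority vote over their final answers. The function name and the candidate answers are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter

def self_consistency_vote(answers):
    """Majority vote over final answers from independently sampled
    reasoning paths (generic sketch, not the paper's exact method)."""
    counts = Counter(answers)
    answer, votes = counts.most_common(1)[0]
    confidence = votes / len(answers)  # fraction of paths that agree
    return answer, confidence

# Hypothetical example: five sampled reasoning paths for one
# multiple-choice video question, each ending in a letter answer.
candidates = ["B", "B", "C", "B", "A"]
answer, confidence = self_consistency_vote(candidates)
# answer == "B", confidence == 0.6
```

In a full pipeline, a low agreement score could trigger re-routing to a longer, decomposed reasoning path, which is the kind of dynamic behavior the paradigm's routing component suggests.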
Similar Papers
When Thinking Drifts: Evidential Grounding for Robust Video Reasoning
CV and Pattern Recognition
Helps AI "see" and "think" better with videos.
Rethinking Chain-of-Thought Reasoning for Videos
CV and Pattern Recognition
Makes AI understand videos faster with less data.
ThinkVideo: High-Quality Reasoning Video Segmentation with Chain of Thoughts
CV and Pattern Recognition
Helps computers track moving things in videos.