VTimeCoT: Thinking by Drawing for Video Temporal Grounding and Reasoning
By: Jinglei Zhang, Yuanfan Guo, Rolandos Alexandros Potamias, and more
Potential Business Impact:
Lets AI pinpoint when events happen in a video and answer questions about them.
In recent years, video question answering based on multimodal large language models (MLLMs) has garnered considerable attention, driven by substantial advances in LLMs. However, these models remain notably weak at video temporal grounding and reasoning, posing challenges to the development of effective real-world video understanding systems. Inspired by how humans interact with a video player's progress bar to comprehend videos, we introduce VTimeCoT, a simple yet effective training-free framework designed for high-performance video grounding and reasoning. The proposed framework incorporates two novel progress-bar-based visual tools: a plug-and-play progress bar integration tool and a high-efficiency highlighting tool. In addition, to address the limitations of conventional text-based chain-of-thought (CoT) approaches, we introduce a visuotemporal CoT process that integrates cross-modality reasoning across both video and text. Our approach demonstrates significant performance improvements over both Qwen2VL-7B and GPT4o baselines on video temporal grounding and reasoning-based question answering tasks. Finally, we show that the proposed framework achieves a compositional and interpretable reasoning process. Project page: https://vtimecot.github.io
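The abstract's progress bar integration tool renders a player-style progress bar onto video frames so the model gets an explicit visual cue of each frame's position in time. The paper does not specify the rendering details here; below is a minimal sketch of the idea using Pillow, where the function name, colors, and layout are all our own assumptions, not the authors' implementation.

```python
from PIL import Image, ImageDraw

def overlay_progress_bar(frame: Image.Image, t: float, duration: float,
                         bar_height: int = 12, margin: int = 8) -> Image.Image:
    """Draw a player-style progress bar near the bottom of a frame.

    The filled portion encodes the frame's timestamp relative to the
    full video duration, giving the model an explicit visual time cue.
    """
    out = frame.copy()
    draw = ImageDraw.Draw(out)
    w, h = out.size
    x0, x1 = margin, w - margin
    y1 = h - margin
    y0 = y1 - bar_height
    # Gray background track, like an idle video-player timeline.
    draw.rectangle([x0, y0, x1, y1], fill=(90, 90, 90))
    # Red filled portion proportional to t / duration, clamped to [0, 1].
    frac = 0.0 if duration <= 0 else max(0.0, min(1.0, t / duration))
    fill_x = x0 + int((x1 - x0) * frac)
    draw.rectangle([x0, y0, fill_x, y1], fill=(220, 40, 40))
    return out

# Example: mark the 30 s point of a 120 s video on a blank 320x180 frame.
frame = Image.new("RGB", (320, 180), (0, 0, 0))
annotated = overlay_progress_bar(frame, t=30.0, duration=120.0)
```

Annotated frames like this can then be fed back to the MLLM, letting a text-only chain of thought reference concrete visual timestamps.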
Similar Papers
Video Finetuning Improves Reasoning Between Frames
CV and Pattern Recognition
Helps computers understand video stories better.
Thinking With Videos: Multimodal Tool-Augmented Reinforcement Learning for Long Video Reasoning
CV and Pattern Recognition
Helps computers understand long videos better.
When Thinking Drifts: Evidential Grounding for Robust Video Reasoning
CV and Pattern Recognition
Helps AI "see" and "think" better with videos.