Datasets and Recipes for Video Temporal Grounding via Reinforcement Learning
By: Ruizhe Chen, Zhiting Fan, Tianze Luo, and more
Potential Business Impact:
Finds exact moments in videos from text descriptions.
Video Temporal Grounding (VTG) aims to localize relevant temporal segments in videos given natural language queries. Despite recent progress with large vision-language models (LVLMs) and instruction tuning, existing approaches often suffer from limited temporal awareness and poor generalization. In this work, we introduce a two-stage training framework that integrates supervised fine-tuning (SFT) with reinforcement learning (RL) to improve both the accuracy and robustness of VTG models. Our approach first leverages high-quality curated cold-start data for SFT initialization, followed by difficulty-controlled RL to further enhance temporal localization and reasoning abilities. Comprehensive experiments on multiple VTG benchmarks demonstrate that our method consistently outperforms existing models, particularly in challenging and open-domain scenarios. We conduct an in-depth analysis of training strategies and dataset curation, highlighting the importance of both high-quality cold-start data and difficulty-controlled RL. To facilitate further research and industrial adoption, we release all intermediate datasets, models, and code to the community.
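As a rough illustration of the two-stage recipe described above, the sketch below shows one plausible reading of the difficulty-controlled data curation step that precedes the RL stage: score each training sample by the temporal IoU between an SFT-initialized model's prediction and the ground-truth segment, then keep only a mid-difficulty band. The helper names (predict_segment, the dummy predictor) and the IoU thresholds are assumptions for illustration, not the paper's released code.

# Minimal sketch of the two-stage recipe from the abstract:
#   Stage 1: SFT on curated cold-start data,
#   Stage 2: RL on a difficulty-controlled subset of the training pool.
# The difficulty score (temporal IoU of an SFT model's guess) and all helper
# names are illustrative assumptions, not the authors' actual implementation.

from typing import Callable, List, Tuple

Segment = Tuple[float, float]  # (start_sec, end_sec)


def temporal_iou(pred: Segment, gt: Segment) -> float:
    """Intersection-over-union of two 1-D temporal segments."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = max(pred[1], gt[1]) - min(pred[0], gt[0])
    return inter / union if union > 0 else 0.0


def difficulty_filter(
    samples: List[dict],
    predict_segment: Callable[[dict], Segment],  # assumed: SFT model's prediction
    low: float = 0.1,
    high: float = 0.7,
) -> List[dict]:
    """Keep samples whose SFT-stage IoU falls in a mid-difficulty band,
    so the RL stage sees queries that are neither trivial nor hopeless."""
    kept = []
    for s in samples:
        iou = temporal_iou(predict_segment(s), s["gt_segment"])
        if low <= iou <= high:
            kept.append(s)
    return kept


if __name__ == "__main__":
    # Toy data: query plus ground-truth segment; a fixed dummy predictor
    # stands in for the SFT-initialized model.
    data = [
        {"query": "person opens the fridge", "gt_segment": (12.0, 18.0)},
        {"query": "dog catches the frisbee", "gt_segment": (3.0, 5.5)},
    ]
    dummy_predict = lambda s: (s["gt_segment"][0] + 1.0, s["gt_segment"][1] + 2.0)
    rl_pool = difficulty_filter(data, dummy_predict)
    print(f"{len(rl_pool)} / {len(data)} samples kept for the RL stage")

In this reading, the band (low, high) is the knob that controls difficulty: tightening it toward harder samples would bias the RL stage toward queries the SFT model has not yet mastered.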
Similar Papers
VideoTG-R1: Boosting Video Temporal Grounding via Curriculum Reinforcement Learning on Reflected Boundary Annotations
CV and Pattern Recognition
Finds video clips matching descriptions faster.
Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding
CV and Pattern Recognition
Helps computers find video clips from descriptions.
TAR-TVG: Enhancing VLMs with Timestamp Anchor-Constrained Reasoning for Temporal Video Grounding
CV and Pattern Recognition
Finds exact moments in videos using words.