Temporally-Constrained Video Reasoning Segmentation and Automated Benchmark Construction
By: Yiqing Shen, Chenjia Li, Chenxiao Fan, and more
Potential Business Impact:
Finds objects in videos using text descriptions.
Conventional approaches to video segmentation are confined to predefined object categories and cannot identify out-of-vocabulary objects, let alone objects that are not named explicitly but only referred to implicitly in complex text queries. This shortcoming limits the utility of video segmentation in complex and variable scenarios, where a closed set of object categories is difficult to define and where users may not know the exact object category that will appear in the video. Such scenarios arise in operating room video analysis, where different health systems may use different workflows and instrumentation, requiring flexible solutions for video analysis. Reasoning segmentation (RS) offers a promising path toward such a solution, using natural language text queries as the interaction for identifying objects to segment. However, existing video RS formulations assume that target objects remain contextually relevant throughout entire video sequences. This assumption is inadequate for real-world scenarios in which objects of interest appear, disappear, or change relevance dynamically based on temporal context, such as surgical instruments that become relevant only during specific procedural phases or anatomical structures that gain importance at particular moments during surgery. Our first contribution is the introduction of temporally-constrained video reasoning segmentation, a novel task formulation that requires models to implicitly infer when target objects become contextually relevant from text queries that incorporate temporal reasoning. Since manual annotation of temporally-constrained video RS datasets would be expensive and would limit scalability, our second contribution is an automated benchmark construction method. Finally, we present TCVideoRSBenchmark, a temporally-constrained video RS dataset containing 52 samples built from videos in the MVOR dataset.
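To make the task formulation concrete, the sketch below shows one plausible way a temporally-constrained video RS sample and its scoring could be structured: ground-truth masks are only non-empty inside the span where the object is contextually relevant, so a model is penalized both for missing the object within that span and for segmenting it outside the span. This is a minimal illustrative sketch assuming a simple mask-IoU metric; the class and function names (TCSample, temporally_constrained_iou) are hypothetical and not the paper's released interface.

```python
# Hypothetical sketch of a temporally-constrained video RS sample and metric.
# Not the authors' code; names and metric choice are assumptions.
from dataclasses import dataclass
import numpy as np


@dataclass
class TCSample:
    """One benchmark sample: a video, an implicit temporal text query, and
    per-frame ground-truth masks that are all-zero outside the frames in
    which the target object is contextually relevant."""
    frames: np.ndarray        # (T, H, W, 3) video frames
    query: str                # e.g. "segment the instrument used while suturing"
    gt_masks: np.ndarray      # (T, H, W) binary masks, empty outside the relevant span
    relevant_span: tuple      # (start_frame, end_frame); inferred by the model, not given


def temporally_constrained_iou(pred_masks: np.ndarray, sample: TCSample) -> float:
    """IoU computed over the whole video: predictions outside the relevant span
    intersect nothing but still enlarge the union, so temporal over-segmentation
    lowers the score just as spatial errors do."""
    inter = np.logical_and(pred_masks, sample.gt_masks).sum()
    union = np.logical_or(pred_masks, sample.gt_masks).sum()
    return float(inter) / float(union) if union > 0 else 1.0
```

Under this reading, a query such as "segment the instrument used while suturing" requires the model to decide, frame by frame, whether the suturing phase is underway before producing any mask at all.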
Similar Papers
Reasoning Segmentation for Images and Videos: A Survey
CV and Pattern Recognition
Lets computers understand what you mean by words.
The Devil is in Temporal Token: High Quality Video Reasoning Segmentation
CV and Pattern Recognition
Helps computers understand and track moving objects in videos.
Reinforcing Video Reasoning Segmentation to Think Before It Segments
CV and Pattern Recognition
Helps computers understand what you want to see in videos.