ReVSeg: Incentivizing the Reasoning Chain for Video Segmentation with Reinforcement Learning
By: Yifan Li, Yingda Yin, Lingting Zhu, and more
Potential Business Impact:
Helps computers find and segment specific moving objects in videos from text descriptions.
Reasoning-centric video object segmentation is an inherently complex task: the query often refers to dynamics, causality, and temporal interactions rather than static appearances. Yet existing solutions generally collapse these factors into simplified reasoning with latent embeddings, rendering the reasoning chain opaque and essentially intractable. We therefore adopt an explicit decomposition perspective and introduce ReVSeg, which executes reasoning as sequential decisions in the native interface of pretrained vision language models (VLMs). Rather than folding all reasoning into a single-step prediction, ReVSeg executes three explicit operations -- semantics interpretation, temporal evidence selection, and spatial grounding -- that align with pretrained capabilities. We further employ reinforcement learning to optimize the multi-step reasoning chain, enabling the model to self-refine its decision quality from outcome-driven signals. Experimental results demonstrate that ReVSeg attains state-of-the-art performance on standard video object segmentation benchmarks and yields interpretable reasoning trajectories. The project page is available at https://clementine24.github.io/ReVSeg/.
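To make the decomposition concrete, the sketch below illustrates what the three sequential operations and an outcome-driven reward signal might look like. This is a minimal, hypothetical sketch: the `vlm` object, its `generate` method, the prompt wording, and the box-IoU reward are illustrative assumptions, not the paper's actual interface, and the real system would pair the grounding output with a mask decoder and optimize the chain with reinforcement learning.

```python
# Hypothetical sketch of ReVSeg's three-step reasoning chain (not the authors' API).

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class ReasoningTrace:
    semantics: str = ""                                   # step 1: interpreted query
    key_frames: List[int] = field(default_factory=list)   # step 2: selected temporal evidence
    boxes: List[Tuple[int, ...]] = field(default_factory=list)  # step 3: spatial grounding


def run_reasoning_chain(vlm, num_frames: int, query: str) -> ReasoningTrace:
    """Execute the three explicit operations as sequential decisions of a VLM."""
    trace = ReasoningTrace()

    # 1. Semantics interpretation: turn a dynamic/causal query into an explicit
    #    description of the target object.
    trace.semantics = vlm.generate(
        f"Describe the object referred to by the query: {query}"
    )

    # 2. Temporal evidence selection: choose the frames where the target is visible.
    reply = vlm.generate(
        f"Given the target '{trace.semantics}', list comma-separated frame indices "
        f"in [0, {num_frames - 1}] that best show it."
    )
    trace.key_frames = [int(t) for t in reply.split(",") if t.strip().isdigit()]

    # 3. Spatial grounding: localize the target in each selected frame as a box
    #    (x1, y1, x2, y2) that a downstream mask decoder could refine into a mask.
    for idx in trace.key_frames:
        box_reply = vlm.generate(
            f"In frame {idx}, give the bounding box of '{trace.semantics}' as x1,y1,x2,y2."
        )
        coords = [int(t) for t in box_reply.split(",") if t.strip().lstrip("-").isdigit()]
        if len(coords) == 4:
            trace.boxes.append(tuple(coords))

    return trace


def box_iou_reward(pred: Tuple[int, int, int, int], gt: Tuple[int, int, int, int]) -> float:
    """Outcome-driven signal (a simple box IoU here) that RL could use to refine the chain."""
    ix1, iy1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    ix2, iy2 = min(pred[2], gt[2]), min(pred[3], gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(b: Tuple[int, int, int, int]) -> int:
        return max(0, b[2] - b[0]) * max(0, b[3] - b[1])

    union = area(pred) + area(gt) - inter
    return inter / union if union > 0 else 0.0
```

In this reading, each step is an ordinary text decision in the VLM's native interface, so the trajectory stays human-readable, and the final segmentation quality provides the scalar outcome reward that reinforcement learning propagates back through all three decisions.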
Similar Papers
VideoSeg-R1: Reasoning Video Object Segmentation via Reinforcement Learning
CV and Pattern Recognition
Teaches computers to understand and cut out moving objects.
Reinforcing Video Reasoning Segmentation to Think Before It Segments
CV and Pattern Recognition
Helps computers understand what you want to see in videos.
Text-Driven Reasoning Video Editing via Reinforcement Learning on Digital Twin Representations
CV and Pattern Recognition
Lets you edit videos by just describing changes.