RSAgent: Learning to Reason and Act for Text-Guided Segmentation via Multi-Turn Tool Invocations
By: Xingqi He, Yujie Zhang, Shuyong Gao and more
Text-guided object segmentation requires both cross-modal reasoning and pixel-grounding abilities. Most recent methods treat text-guided segmentation as one-shot grounding, where the model predicts pixel prompts in a single forward pass to drive an external segmentor; this leaves no room for verification, refocusing, or refinement when the initial localization is wrong. To address this limitation, we propose RSAgent, an agentic Multimodal Large Language Model (MLLM) that interleaves reasoning and action for segmentation via multi-turn tool invocations. RSAgent queries a segmentation toolbox, observes visual feedback, and revises its spatial hypothesis using historical observations to re-localize targets and iteratively refine masks. We further build a data pipeline to synthesize multi-turn reasoning-segmentation trajectories, and train RSAgent with a two-stage framework: cold-start supervised fine-tuning followed by agentic reinforcement learning with fine-grained, task-specific rewards. Extensive experiments show that RSAgent achieves 66.5% gIoU zero-shot on the ReasonSeg test set, improving over Seg-Zero-7B by 9%, and reaches 81.5% cIoU on RefCOCOg, demonstrating state-of-the-art performance on both in-domain and out-of-domain benchmarks.
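To make the interleaved reason-act loop described in the abstract concrete, the following is a minimal sketch of how such a multi-turn agent could be organized. The interfaces `mllm.propose_action` and `toolbox.segment` are hypothetical placeholders for the MLLM policy and the segmentation toolbox; this is an illustration of the general pattern, not the authors' implementation.

```python
# Hypothetical sketch of a multi-turn reasoning-segmentation loop.
# `mllm` and `toolbox` are assumed interfaces, not the paper's actual API.

def run_agent(image, query, mllm, toolbox, max_turns=5):
    """Iteratively reason, invoke a segmentation tool, observe feedback, and refine the mask."""
    history = []   # accumulated (action, observation) pairs from earlier turns
    mask = None
    for _ in range(max_turns):
        # The MLLM reads the image, the text query, and all prior observations,
        # then emits either a tool call (e.g. point/box prompts) or a stop signal.
        action = mllm.propose_action(image, query, history)
        if action.kind == "finish":
            break
        # The toolbox (e.g. a promptable segmentor) returns a candidate mask plus
        # a rendered overlay that the agent can inspect on the next turn.
        mask, overlay = toolbox.segment(image, prompts=action.prompts)
        history.append((action, overlay))
    return mask
```

The key design point this sketch captures is that the mask is not committed after a single forward pass: each turn's visual observation is appended to the history, so the agent can verify its hypothesis, re-localize the target, and refine the mask before stopping.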
Similar Papers
Guideline-Consistent Segmentation via Multi-Agent Refinement
CV and Pattern Recognition
Refines segmentations with multiple cooperating agents so they stay consistent with task guidelines.
Bridging Semantics and Geometry: A Decoupled LVLM-SAM Framework for Reasoning Segmentation in Remote Sensing
CV and Pattern Recognition
Decouples semantic reasoning (LVLM) from geometric grounding (SAM) for reasoning segmentation in remote sensing imagery.
VideoSeg-R1: Reasoning Video Object Segmentation via Reinforcement Learning
CV and Pattern Recognition
Applies reinforcement learning to reasoning-driven video object segmentation.