LENS: Learning to Segment Anything with Unified Reinforced Reasoning
By: Lianghui Zhu, Bin Ouyang, Yuxuan Zhang, and more
Potential Business Impact:
Teaches computers to cut out objects in pictures described by words.
Text-prompted image segmentation enables fine-grained visual understanding and is critical for applications such as human-computer interaction and robotics. However, existing supervised fine-tuning methods typically ignore explicit chain-of-thought (CoT) reasoning at test time, which limits their ability to generalize to unseen prompts and domains. To address this issue, we introduce LENS, a scalable reinforcement-learning framework that jointly optimizes the reasoning process and segmentation in an end-to-end manner. We propose unified reinforcement-learning rewards that span sentence-, box-, and segment-level cues, encouraging the model to generate informative CoT rationales while refining mask quality. Using a publicly available 3-billion-parameter vision-language model (Qwen2.5-VL-3B-Instruct), LENS achieves an average cIoU of 81.2% on the RefCOCO, RefCOCO+, and RefCOCOg benchmarks, outperforming the strong fine-tuned GLaMM model by up to 5.6%. These results demonstrate that RL-driven CoT reasoning serves as a robust prior for text-prompted segmentation and offers a practical path toward more generalizable Segment Anything models. Code is available at https://github.com/hustvl/LENS.
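The abstract describes unified rewards spanning sentence-, box-, and segment-level cues. A minimal sketch of how such a combined reward could be scored is below; the function names, the toy sentence-level check, and the weights are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a LENS-style unified reward (names and weights are
# assumptions): a rollout is scored by combining a sentence-level reward
# (a well-formed CoT rationale), a box-level reward (box IoU), and a
# segment-level reward (mask IoU), as the abstract describes.

def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def mask_iou(pred, gt):
    """IoU of two binary masks represented as sets of pixel coordinates."""
    union = len(pred | gt)
    return len(pred & gt) / union if union > 0 else 0.0

def unified_reward(cot_text, pred_box, gt_box, pred_mask, gt_mask,
                   weights=(0.2, 0.3, 0.5)):
    """Weighted sum of sentence-, box-, and segment-level rewards."""
    # Sentence-level: toy proxy rewarding a non-trivial reasoning trace.
    r_sent = 1.0 if len(cot_text.split()) >= 5 else 0.0
    r_box = box_iou(pred_box, gt_box)
    r_seg = mask_iou(pred_mask, gt_mask)
    ws, wb, wm = weights
    return ws * r_sent + wb * r_box + wm * r_seg
```

With a perfect box, a perfect mask, and a non-trivial rationale, the reward reaches its maximum of 1.0; degrading any of the three cues lowers the reward proportionally to its weight.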
Similar Papers
Reinforcing Video Reasoning Segmentation to Think Before It Segments
CV and Pattern Recognition
Helps computers understand what you want to see in videos.
ReVSeg: Incentivizing the Reasoning Chain for Video Segmentation with Reinforcement Learning
CV and Pattern Recognition
Helps computers understand moving objects in videos.
VideoSeg-R1: Reasoning Video Object Segmentation via Reinforcement Learning
CV and Pattern Recognition
Teaches computers to understand and cut out moving objects.