Granular Computing-driven SAM: From Coarse-to-Fine Guidance for Prompt-Free Segmentation
By: Qiyang Yu, Yu Fang, Tianrui Li, and more
Potential Business Impact:
Makes computers cut out objects in pictures automatically.
Prompt-free image segmentation aims to generate accurate masks without manual guidance. Typical pre-trained models, notably the Segment Anything Model (SAM), generate prompts directly at a single granularity level. This approach has two limitations: (1) localizability, lacking a mechanism for autonomous region localization; and (2) scalability, offering limited fine-grained modeling at high resolution. To address these challenges, we introduce Granular Computing-driven SAM (Grc-SAM), a coarse-to-fine framework motivated by Granular Computing (GrC). First, the coarse stage adaptively extracts high-response regions from features to achieve precise foreground localization and reduce reliance on external prompts. Second, the fine stage applies finer patch partitioning with sparse local Swin-style attention to enhance detail modeling and enable high-resolution segmentation. Third, refined masks are encoded as latent prompt embeddings for the SAM decoder, replacing handcrafted prompts with an automated reasoning process. By integrating multi-granularity attention, Grc-SAM bridges granular computing with vision transformers. Extensive experimental results demonstrate that Grc-SAM outperforms baseline methods in both accuracy and scalability, offering a distinct granular-computing perspective on prompt-free segmentation.
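To make the coarse-to-fine idea concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: a coarse pass that thresholds high-response feature regions, a fine pass using windowed (Swin-style) local attention, and a projection of the refined features into prompt-like embeddings. All function names, shapes, thresholds, and module choices here are illustrative assumptions for exposition, not the authors' actual Grc-SAM implementation.

```python
# Illustrative sketch only; names, shapes, and thresholds are assumptions.
import torch
import torch.nn as nn


def coarse_localize(feat: torch.Tensor, thresh: float = 0.6) -> torch.Tensor:
    """Coarse stage (assumed): mark high-response regions in a feature map.

    feat: (B, C, H, W) backbone features.
    Returns a binary mask (B, 1, H, W) flagging candidate foreground locations.
    """
    response = feat.norm(dim=1, keepdim=True)  # per-location activation strength
    lo = response.amin(dim=(2, 3), keepdim=True)
    hi = response.amax(dim=(2, 3), keepdim=True)
    response = (response - lo) / (hi - lo + 1e-6)  # normalize to [0, 1]
    return (response > thresh).float()


class WindowAttention(nn.Module):
    """Fine stage (assumed): sparse local attention over non-overlapping windows,
    loosely in the spirit of Swin-style windowed attention."""

    def __init__(self, dim: int, window: int = 4, heads: int = 4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) with H and W divisible by the window size
        B, C, H, W = x.shape
        w = self.window
        x = x.view(B, C, H // w, w, W // w, w).permute(0, 2, 4, 3, 5, 1)
        x = x.reshape(B * (H // w) * (W // w), w * w, C)  # tokens grouped per window
        x, _ = self.attn(x, x, x)                         # attention stays inside each window
        x = x.reshape(B, H // w, W // w, w, w, C).permute(0, 5, 1, 3, 2, 4)
        return x.reshape(B, C, H, W)


# Toy usage: the coarse mask gates the features, windowed attention refines them,
# and the refined map is projected into prompt-like embeddings for a SAM-style decoder.
feat = torch.randn(1, 32, 16, 16)
coarse_mask = coarse_localize(feat)
refined = WindowAttention(dim=32)(feat * coarse_mask)
prompt_embedding = nn.Linear(32, 256)(refined.flatten(2).transpose(1, 2))  # (1, 256 tokens, 256)
print(coarse_mask.shape, refined.shape, prompt_embedding.shape)
```

The design point the sketch tries to capture is that the coarse stage replaces an external prompt (a user click or box) with an internally computed region of interest, while the fine stage only pays attention cost inside local windows, which is what makes high-resolution refinement tractable.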
Similar Papers
UnSAMv2: Self-Supervised Learning Enables Segment Anything at Any Granularity
CV and Pattern Recognition
Lets computers cut out any object at any size.
SAM 3: Segment Anything with Concepts
CV and Pattern Recognition
Finds and tracks any object you describe.
Enhancing SAM with Efficient Prompting and Preference Optimization for Semi-supervised Medical Image Segmentation
CV and Pattern Recognition
Finds body parts in medical pictures faster.