SSR: Semantic and Spatial Rectification for CLIP-based Weakly Supervised Segmentation
By: Xiuli Bi, Die Xiao, Junchao Fan, and more
Potential Business Impact:
Makes computers label objects in pictures more accurately by understanding words.
In recent years, Contrastive Language-Image Pretraining (CLIP) has been widely applied to Weakly Supervised Semantic Segmentation (WSSS) tasks due to its powerful cross-modal semantic understanding. This paper proposes a novel Semantic and Spatial Rectification (SSR) method to address two limitations of existing CLIP-based WSSS approaches: over-activation in non-target foreground regions and in background areas. Specifically, at the semantic level, Cross-Modal Prototype Alignment (CMPA) establishes a contrastive learning mechanism that enforces feature-space alignment across modalities, reducing inter-class overlap while enhancing semantic correlation, thereby effectively rectifying over-activation in non-target foreground regions; at the spatial level, Superpixel-Guided Correction (SGC) leverages superpixel-based spatial priors to filter out interference from non-target regions during affinity propagation, significantly rectifying background over-activation. Extensive experiments on the PASCAL VOC and MS COCO datasets demonstrate that our method outperforms all single-stage approaches, as well as more complex multi-stage approaches, achieving mIoU scores of 79.5% and 50.6%, respectively.
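The abstract describes two mechanisms: a contrastive loss pulling each class's image prototype toward its CLIP text embedding (CMPA), and an averaging of activations within superpixels so they respect region boundaries (SGC). The paper's actual formulations are not given here, so the following is only a minimal NumPy sketch of both ideas under common conventions; all function names, shapes, and the InfoNCE-style loss choice are assumptions, not the authors' implementation.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize rows to unit length, as is standard for CLIP embeddings."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + 1e-8)

def cross_modal_prototype_loss(img_protos, txt_embeds, temperature=0.07):
    """InfoNCE-style alignment (assumed form of CMPA): each class's image
    prototype should be most similar to its own text embedding (diagonal)
    and dissimilar to other classes' embeddings (off-diagonal)."""
    img = l2norm(img_protos)                 # (C, D) per-class image prototypes
    txt = l2norm(txt_embeds)                 # (C, D) CLIP text embeddings
    logits = img @ txt.T / temperature       # (C, C) cosine similarities
    # cross-entropy with the diagonal entries as positives
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def superpixel_refine(cam, superpixels):
    """Assumed form of the SGC spatial prior: average the class activation
    map within each superpixel, so activations snap to low-level region
    boundaries and stray background activations get smoothed away."""
    refined = np.zeros_like(cam, dtype=float)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        refined[mask] = cam[mask].mean()
    return refined
```

As a sanity check, the contrastive loss is smaller when prototypes already match their text embeddings than when the class correspondence is scrambled, and the superpixel refinement replaces each region's activations with its mean.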
Similar Papers
Contrastive Prompt Clustering for Weakly Supervised Semantic Segmentation
CV and Pattern Recognition
Teaches computers to see objects more precisely.
Refining CLIP's Spatial Awareness: A Visual-Centric Perspective
CV and Pattern Recognition
Helps computers understand pictures and where things are.
LPD: Learnable Prototypes with Diversity Regularization for Weakly Supervised Histopathology Segmentation
CV and Pattern Recognition
Finds cancer cells better in pictures.