Structure-Aware Feature Rectification with Region Adjacency Graphs for Training-Free Open-Vocabulary Semantic Segmentation
By: Qiming Huang, Hao Ai, Jianbo Jiao
Potential Business Impact:
Helps computers understand fine details in pictures more accurately.
Benefiting from inductive biases learned from large-scale datasets, open-vocabulary semantic segmentation (OVSS) leverages vision-language models such as CLIP to achieve remarkable progress without task-specific training. However, because CLIP is pre-trained on image-text pairs, it tends to focus on global semantic alignment and performs suboptimally when associating fine-grained visual regions with text, leading to noisy and inconsistent predictions, particularly in local areas. We attribute this to a dispersed bias stemming from CLIP's contrastive training paradigm, which is difficult to alleviate using CLIP features alone. To address this, we propose a structure-aware feature rectification approach that incorporates instance-specific priors derived directly from the image. Specifically, we construct a region adjacency graph (RAG) from low-level features (e.g., colour and texture) to capture local structural relationships, and use it to refine CLIP features by enhancing local discrimination. Extensive experiments show that our method effectively suppresses segmentation noise, improves region-level consistency, and achieves strong performance on multiple open-vocabulary segmentation benchmarks.
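To make the idea concrete, here is a minimal sketch (not the authors' code) of RAG-based feature rectification: over-segment the image into superpixels, build a region adjacency graph from colour cues with scikit-image, pool CLIP features per region, and smooth them across adjacent regions. The feature shapes, the one-step uniform averaging, and the mixing weight `alpha` are illustrative assumptions; the paper also mentions texture cues, omitted here for brevity.

```python
import numpy as np
from skimage import segmentation, graph

def rectify_clip_features(image, clip_feats, alpha=0.5):
    """Sketch of structure-aware rectification with a region adjacency graph.

    image: (H, W, 3) float array in [0, 1].
    clip_feats: (H, W, D) dense CLIP patch features, upsampled to pixel grid.
    Returns rectified (H, W, D) features.
    """
    # 1) Over-segment the image into superpixels from low-level colour cues.
    labels = segmentation.slic(image, n_segments=200, compactness=10,
                               start_label=0)
    # 2) Build a region adjacency graph whose nodes are superpixels.
    rag = graph.rag_mean_color(image, labels)
    # 3) Pool CLIP features within each region.
    n_regions = labels.max() + 1
    region_feats = np.zeros((n_regions, clip_feats.shape[-1]))
    for r in range(n_regions):
        region_feats[r] = clip_feats[labels == r].mean(axis=0)
    # 4) One smoothing step: pull each region's feature towards the mean of
    #    its RAG neighbours to enhance local consistency (uniform weights;
    #    edge-weighted variants are a natural extension).
    smoothed = region_feats.copy()
    for r in rag.nodes:
        nbrs = list(rag.neighbors(r))
        if nbrs:
            smoothed[r] = ((1 - alpha) * region_feats[r]
                           + alpha * region_feats[nbrs].mean(axis=0))
    # 5) Scatter the rectified region features back to the pixel grid.
    return smoothed[labels]
```

Under these assumptions, the smoothing suppresses per-patch noise while the superpixel boundaries, derived from the image itself, keep the rectified features aligned with object edges.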
Similar Papers
SSR: Semantic and Spatial Rectification for CLIP-based Weakly Supervised Segmentation
CV and Pattern Recognition
Helps computers segment pictures more accurately by understanding words.
Annotation-Free Open-Vocabulary Segmentation for Remote-Sensing Images
CV and Pattern Recognition
Maps Earth's land without needing labels.
A Training-Free Framework for Open-Vocabulary Image Segmentation and Recognition with EfficientNet and CLIP
CV and Pattern Recognition
Lets computers find and name any object in pictures.