Adaptive Spatial Augmentation for Semi-supervised Semantic Segmentation
By: Lingyan Ran, Yali Li, Tao Zhuo, and more
Potential Business Impact:
Teaches computers to better understand pictures.
In semi-supervised semantic segmentation (SSSS), data augmentation plays a crucial role in the weak-to-strong consistency regularization framework, as it enhances diversity and improves model generalization. Recent strong augmentation methods have focused primarily on intensity-based perturbations, which have minimal impact on the semantic masks. In contrast, spatial augmentations such as translation and rotation have long been recognized as effective in supervised semantic segmentation, yet they are often ignored in SSSS. In this work, we demonstrate that spatial augmentation can also contribute to model training in SSSS, despite producing masks that are inconsistent between the weak and strong views. Furthermore, recognizing the variability among images, we propose an adaptive augmentation strategy that dynamically adjusts the augmentation for each instance based on entropy. Extensive experiments show that our proposed Adaptive Spatial Augmentation (ASAug) can be integrated as a pluggable module, consistently improving the performance of existing methods and achieving state-of-the-art results on benchmark datasets such as PASCAL VOC 2012, Cityscapes, and COCO.
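The abstract does not specify how entropy maps to augmentation strength, so the following is only a minimal sketch of the general idea: compute the mean per-pixel prediction entropy of an unlabeled image, then scale a spatial perturbation (here a hypothetical rotation-angle schedule) per instance. The function names, the linear schedule, and the "confident images get stronger augmentation" direction are all assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def prediction_entropy(probs, eps=1e-8):
    """Mean per-pixel entropy of a softmax prediction map of shape (H, W, C)."""
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())

def adaptive_rotation_angle(entropy, max_entropy, max_angle=30.0):
    """Hypothetical per-instance schedule: low-entropy (confident) images
    receive a stronger spatial perturbation; uncertain ones are rotated less."""
    strength = 1.0 - min(entropy / max_entropy, 1.0)
    return max_angle * strength

# Toy 2-class example: one confident and one maximally uncertain prediction.
confident = np.zeros((4, 4, 2))
confident[..., 0], confident[..., 1] = 0.99, 0.01
uncertain = np.full((4, 4, 2), 0.5)

max_ent = np.log(2)  # maximum entropy for 2 classes
angle_conf = adaptive_rotation_angle(prediction_entropy(confident), max_ent)
angle_unc = adaptive_rotation_angle(prediction_entropy(uncertain), max_ent)
print(angle_conf > angle_unc)  # the confident image is rotated more
```

In a weak-to-strong pipeline, the chosen angle would be applied to the strong view, and the pseudo-label from the weak view would be transformed with the same spatial parameters so the consistency loss compares aligned masks.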
Similar Papers
Adversarial Semantic Augmentation for Training Generative Adversarial Networks under Limited Data
CV and Pattern Recognition
Makes AI create better pictures with less data.
Enhancing Contrastive Learning for Retinal Imaging via Adjusted Augmentation Scales
CV and Pattern Recognition
Helps AI see better in medical pictures.
An Augmentation-Aware Theory for Self-Supervised Contrastive Learning
Machine Learning (CS)
Teaches computers to learn from pictures without labels.