TextureSAM: Towards a Texture Aware Foundation Model for Segmentation
By: Inbal Cohen, Boaz Meivar, Peihan Tu, and more
Potential Business Impact:
Helps computers recognize objects by their texture, not just their shape.
Segment Anything Models (SAM) have achieved remarkable success in object segmentation tasks across diverse datasets. However, these models are predominantly trained on large-scale semantic segmentation datasets, which introduce a bias toward object shape rather than texture cues. This limitation is critical in domains such as medical imaging, material classification, and remote sensing, where texture changes define object boundaries. In this study, we investigate SAM's bias toward semantics over texture and introduce a new texture-aware foundation model, TextureSAM, which achieves superior segmentation in texture-dominant scenarios. To achieve this, we employ a novel fine-tuning approach that incorporates texture augmentation techniques, incrementally modifying training images to emphasize texture features. By leveraging a texture-altered version of the ADE20K dataset, we guide TextureSAM to prioritize texture-defined regions, thereby mitigating the inherent shape bias present in the original SAM model. Our extensive experiments demonstrate that TextureSAM significantly outperforms SAM-2 on both natural (+0.2 mIoU) and synthetic (+0.18 mIoU) texture-based segmentation datasets. The code and texture-augmented dataset will be publicly available.
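The abstract does not spell out the texture-augmentation procedure, so the following is only a minimal sketch of one plausible form it could take: each annotated region of an image is blended with a randomly chosen texture, and the blend strength is increased over the course of fine-tuning to emphasize texture over shape incrementally. The function name `texture_augment`, the alpha schedule, and the texture bank are all hypothetical illustration, not the paper's actual implementation.

```python
import numpy as np

def texture_augment(image, seg_mask, texture_bank, alpha, rng=None):
    """Blend a random texture into each labeled region of `image` (illustrative sketch).

    image        : HxWx3 float array in [0, 1]
    seg_mask     : HxW int array of region labels (e.g. ADE20K-style annotations)
    texture_bank : list of float texture images in [0, 1]
    alpha        : blend strength in [0, 1]; raising it across fine-tuning
                   epochs gives an "incremental" emphasis on texture
    """
    rng = rng or np.random.default_rng()
    out = image.copy()
    h, w = seg_mask.shape
    for label in np.unique(seg_mask):
        tex = texture_bank[rng.integers(len(texture_bank))]
        # Tile the texture so it covers the whole image, then crop to size.
        reps = (h // tex.shape[0] + 1, w // tex.shape[1] + 1, 1)
        tiled = np.tile(tex, reps)[:h, :w]
        region = seg_mask == label
        # Replace region appearance with a texture/image blend.
        out[region] = (1 - alpha) * image[region] + alpha * tiled[region]
    return out

# Toy usage: ramp alpha from 0.2 to 0.8 over successive fine-tuning epochs.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))
    mask = (np.arange(64)[:, None] // 32 + np.arange(64)[None, :] // 32).astype(int)
    bank = [rng.random((16, 16, 3)) for _ in range(4)]
    for epoch, alpha in enumerate(np.linspace(0.2, 0.8, 4)):
        augmented = texture_augment(img, mask, bank, float(alpha), rng)
        print(f"epoch {epoch}: alpha={alpha:.2f}, "
              f"mean abs change={np.abs(augmented - img).mean():.3f}")
```

In this reading, a schedule over alpha is what makes the modification "incremental": early epochs keep images close to the originals, while later epochs are dominated by texture, pushing the fine-tuned model to rely on texture boundaries rather than object shape.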
Similar Papers
SAM-aware Test-time Adaptation for Universal Medical Image Segmentation
CV and Pattern Recognition
Helps doctors see inside bodies better.
MedicoSAM: Towards foundation models for medical image segmentation
Image and Video Processing
Helps doctors see inside bodies better.
S^4M: Boosting Semi-Supervised Instance Segmentation with SAM
CV and Pattern Recognition
Teaches computers to find and label things in pictures.