Score: 2

TextureSAM: Towards a Texture Aware Foundation Model for Segmentation

Published: May 22, 2025 | arXiv ID: 2505.16540v1

By: Inbal Cohen, Boaz Meivar, Peihan Tu and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Helps computers recognize objects by their texture (their "feel"), not just their shape.

Business Areas:
Semantic Search, Internet Services

Segment Anything Models (SAM) have achieved remarkable success in object segmentation tasks across diverse datasets. However, these models are predominantly trained on large-scale semantic segmentation datasets, which introduces a bias toward object shape rather than texture cues in the image. This limitation is critical in domains such as medical imaging, material classification, and remote sensing, where texture changes define object boundaries. In this study, we investigate SAM's bias toward semantics over textures and introduce a new texture-aware foundation model, TextureSAM, which achieves superior segmentation in texture-dominant scenarios. To achieve this, we employ a novel fine-tuning approach that incorporates texture augmentation techniques, incrementally modifying training images to emphasize texture features. By leveraging a novel texture-altered version of the ADE20K dataset, we guide TextureSAM to prioritize texture-defined regions, thereby mitigating the inherent shape bias present in the original SAM model. Our extensive experiments demonstrate that TextureSAM significantly outperforms SAM-2 on both natural (+0.2 mIoU) and synthetic (+0.18 mIoU) texture-based segmentation datasets. The code and texture-augmented dataset will be publicly available.
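The abstract describes incrementally modifying training images to emphasize texture over shape. As a rough illustration of that idea (not the paper's actual pipeline), one could alpha-blend a texture map into each training image and raise the blend strength across fine-tuning stages; the function name, blend scheme, and schedule below are all assumptions for the sketch:

```python
import numpy as np

def texture_augment(image: np.ndarray, texture: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Blend a texture map into a training image.

    Hypothetical sketch of incremental texture emphasis: a larger
    `alpha` makes texture cues dominate over the original shapes.
    Both inputs are uint8 HxWxC arrays of the same size.
    """
    blended = (1.0 - alpha) * image.astype(np.float32) + alpha * texture.astype(np.float32)
    return np.clip(blended, 0.0, 255.0).astype(np.uint8)

# Toy inputs: a flat gray "object" image and a noisy texture map.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
tex = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3), dtype=np.uint8)

# An incremental schedule: later fine-tuning stages see stronger texture.
stages = [texture_augment(img, tex, alpha=a) for a in (0.1, 0.3, 0.5)]
```

In a real fine-tuning loop this kind of augmentation would be applied on the fly to ADE20K images while keeping the segmentation labels fixed, so the model learns to associate region boundaries with texture changes rather than object outlines.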

Country of Origin
🇺🇸 🇮🇱 United States, Israel

Page Count
10 pages

Category
Computer Science:
CV and Pattern Recognition