Integrating SAM Supervision for 3D Weakly Supervised Point Cloud Segmentation

Published: August 27, 2025 | arXiv ID: 2508.19909v1

By: Lechun You, Zhonghua Wu, Weide Liu, and more

Potential Business Impact:

Helps computers understand 3D shapes with less 3D data.

Business Areas:
Image Recognition, Data and Analytics, Software

Current methods for 3D semantic segmentation address the difficulty of annotating large, irregular, and unordered point cloud data by training models with limited annotations. They usually focus on the 3D domain alone, without leveraging the complementary nature of 2D and 3D data. In addition, some methods extend the original labels or generate pseudo labels to guide training, but they often fail to fully exploit these labels or to handle the noise within them. Meanwhile, the emergence of comprehensive and adaptable foundation models has offered effective solutions for segmenting 2D data. Leveraging this advancement, we present a novel approach that maximizes the utility of sparse 3D annotations by incorporating segmentation masks generated by 2D foundation models. We propagate the 2D segmentation masks into 3D space by establishing geometric correspondences between 3D scenes and 2D views, then extend the highly sparse annotations to cover the areas delineated by the resulting 3D masks, substantially enlarging the pool of available labels. Furthermore, we apply confidence- and uncertainty-based consistency regularization to augmentations of the 3D point cloud to select reliable pseudo labels, which are then spread over the 3D masks to generate additional labels. This strategy bridges the gap between limited 3D annotations and the powerful capabilities of 2D foundation models, ultimately improving the performance of 3D weakly supervised segmentation.
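The core idea of propagating 2D masks into 3D can be sketched with a standard pinhole-camera projection: each 3D point is projected into a view, picks up the ID of the 2D segment it lands in, and sparse labels are then spread to all points sharing a segment. The sketch below is a minimal illustration under simplifying assumptions (a single view, known intrinsics `K` and world-to-camera pose `T`, majority-vote spreading); the function names and the voting rule are hypothetical and are not taken from the paper.

```python
import numpy as np

def project_points(points, K, T_world_to_cam, img_h, img_w):
    """Project Nx3 world-space points into pixel coordinates for one view.

    Returns integer pixel coords (u, v) and a `valid` mask marking points
    that lie in front of the camera and inside the image bounds.
    """
    n = points.shape[0]
    homog = np.hstack([points, np.ones((n, 1))])       # Nx4 homogeneous
    cam = (T_world_to_cam @ homog.T).T[:, :3]          # Nx3 camera coords
    in_front = cam[:, 2] > 1e-6
    z = np.where(in_front, cam[:, 2], 1.0)             # avoid divide-by-zero
    pix = (K @ cam.T).T
    u, v = pix[:, 0] / z, pix[:, 1] / z
    valid = in_front & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return u.astype(int), v.astype(int), valid

def spread_labels(points, sparse_labels, mask_img, K, T, unlabeled=-1):
    """Lift a 2D segment map into 3D and spread sparse point labels.

    mask_img: HxW array of 2D segment ids (e.g. produced by a 2D
    foundation model). Unlabeled points inherit the majority label of
    the annotated points that fall inside the same segment.
    """
    h, w = mask_img.shape
    u, v, valid = project_points(points, K, T, h, w)
    seg_ids = np.full(points.shape[0], -1, dtype=int)
    seg_ids[valid] = mask_img[v[valid], u[valid]]

    labels = sparse_labels.copy()
    for seg in np.unique(seg_ids[seg_ids >= 0]):
        member = seg_ids == seg
        known = labels[member]
        known = known[known != unlabeled]
        if known.size:  # majority vote among annotated points in the segment
            values, counts = np.unique(known, return_counts=True)
            labels[member & (labels == unlabeled)] = values[np.argmax(counts)]
    return labels
```

In a multi-view setting one would aggregate segment assignments across views before voting, and the paper's confidence- and uncertainty-based filtering would further gate which pseudo labels are allowed to spread; this sketch shows only the single-view geometric lifting step.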

Country of Origin
πŸ‡¬πŸ‡§ πŸ‡ΈπŸ‡¬ United Kingdom, Singapore

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition