S2AM3D: Scale-controllable Part Segmentation of 3D Point Cloud
By: Han Su, Tianyu Huang, Zichen Wan, and more
Potential Business Impact:
Helps computers understand 3D shapes part by part.
Part-level point cloud segmentation has recently attracted significant attention in 3D computer vision. Nevertheless, existing research is constrained by two major challenges: native 3D models generalize poorly due to data scarcity, while introducing 2D pre-trained knowledge often leads to inconsistent segmentation results across views. To address these challenges, we propose S2AM3D, which combines 2D segmentation priors with 3D-consistent supervision. We design a point-consistent part encoder that aggregates multi-view 2D features through native 3D contrastive learning, producing globally consistent point features. A scale-aware prompt decoder is then proposed to enable real-time adjustment of segmentation granularity via continuous scale signals. In addition, we introduce a large-scale, high-quality part-level point cloud dataset with more than 100k samples, providing ample supervision signals for model training. Extensive experiments demonstrate that S2AM3D achieves leading performance across multiple evaluation settings, exhibiting strong robustness and controllability when handling complex structures and parts with large size variations.
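To make the scale-control idea concrete, the sketch below shows one way a continuous scale signal could condition per-point features in a prompt-decoder head. This is a minimal, hypothetical PyTorch illustration: the module name, dimensions, and FiLM-style modulation are assumptions for exposition, not the authors' implementation.

```python
# Illustrative sketch (not the S2AM3D code): a decoder head that conditions
# globally consistent point features on a continuous scale signal in [0, 1].
import torch
import torch.nn as nn

class ScaleAwarePromptDecoder(nn.Module):
    def __init__(self, feat_dim: int = 256, num_parts: int = 32):
        super().__init__()
        # Embed the scalar scale signal so it can modulate point features.
        self.scale_embed = nn.Sequential(
            nn.Linear(1, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        # FiLM-style modulation followed by a per-point part classifier.
        self.gamma = nn.Linear(feat_dim, feat_dim)
        self.beta = nn.Linear(feat_dim, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_parts)

    def forward(self, point_feats: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, feat_dim) per-point features
        # scale:       (B, 1) granularity signal, e.g. 0 = coarse, 1 = fine
        s = self.scale_embed(scale)            # (B, feat_dim)
        g = self.gamma(s).unsqueeze(1)         # (B, 1, feat_dim)
        b = self.beta(s).unsqueeze(1)          # (B, 1, feat_dim)
        modulated = point_feats * (1 + g) + b  # scale-conditioned features
        return self.classifier(modulated)      # (B, N, num_parts) part logits

# Changing `scale` at inference time shifts the predicted granularity
# without retraining, mirroring the behaviour described in the abstract.
feats = torch.randn(2, 1024, 256)
logits = ScaleAwarePromptDecoder()(feats, torch.tensor([[0.2], [0.8]]))
```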
Similar Papers
P3-SAM: Native 3D Part Segmentation
CV and Pattern Recognition
Breaks down 3D objects into parts automatically.
Integrating SAM Supervision for 3D Weakly Supervised Point Cloud Segmentation
CV and Pattern Recognition
Helps computers understand 3D shapes with less 3D data.