DOS: Distilling Observable Softmaps of Zipfian Prototypes for Self-Supervised Point Representation
By: Mohamed Abdelsamad, Michael Ulrich, Bin Yang, and more
Potential Business Impact:
Teaches computers to understand 3D shapes better.
Recent advances in self-supervised learning (SSL) have shown tremendous potential for learning 3D point cloud representations without human annotations. However, SSL for 3D point clouds still faces critical challenges due to irregular geometry, shortcut-prone reconstruction, and unbalanced semantic distributions. In this work, we propose DOS (Distilling Observable Softmaps), a novel SSL framework that self-distills semantic relevance softmaps only at observable (unmasked) points. This strategy prevents information leakage from masked regions and provides richer supervision than discrete token-to-prototype assignments. To address the challenge of unbalanced semantics in an unsupervised setting, we introduce Zipfian prototypes and incorporate them via a modified Sinkhorn-Knopp algorithm, Zipf-Sinkhorn, which enforces a power-law prior over prototype usage and modulates the sharpness of the target softmap during training. DOS outperforms current state-of-the-art methods on semantic segmentation and 3D object detection across multiple benchmarks, including nuScenes, Waymo, SemanticKITTI, ScanNet, and ScanNet200, without relying on extra data or annotations. Our results demonstrate that observable-point softmap distillation offers a scalable and effective paradigm for learning robust 3D representations.
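The abstract does not spell out Zipf-Sinkhorn in detail, but its core idea — Sinkhorn-Knopp balancing with a power-law prior over prototype usage instead of the usual uniform one — can be sketched. The following is a minimal illustration, not the paper's implementation: it assumes a SwAV-style setup where point-to-prototype similarity logits are projected onto a transport polytope whose column marginals follow a Zipf distribution. All function names and parameters (`zipf_prior`, `zipf_sinkhorn`, `exponent`, `epsilon`) are illustrative.

```python
import numpy as np

def zipf_prior(num_prototypes, exponent=1.0):
    """Power-law prior over prototype usage: p_k proportional to 1 / k^exponent."""
    ranks = np.arange(1, num_prototypes + 1, dtype=np.float64)
    p = ranks ** (-exponent)
    return p / p.sum()

def zipf_sinkhorn(scores, exponent=1.0, epsilon=0.05, n_iters=3):
    """Sinkhorn-Knopp with a Zipfian column marginal (illustrative sketch).

    scores: (N, K) array of point-to-prototype similarity logits.
    Returns a soft assignment matrix whose rows each sum to 1 (per-point
    softmaps) and whose columns approximately follow the Zipf prior.
    """
    N, K = scores.shape
    Q = np.exp(scores / epsilon)   # temperature-scaled similarities
    Q /= Q.sum()
    col_target = zipf_prior(K, exponent)   # power-law prototype usage
    row_target = np.full(N, 1.0 / N)       # each point carries equal mass
    for _ in range(n_iters):
        # project columns onto the Zipfian marginal
        Q *= (col_target / Q.sum(axis=0))[None, :]
        # project rows onto the uniform marginal
        Q *= (row_target / Q.sum(axis=1))[:, None]
    return Q * N   # rescale so each row is a probability distribution
```

With a uniform prior in place of `zipf_prior`, this reduces to the standard equal-usage Sinkhorn-Knopp used in SwAV-style clustering; the power-law column marginal is what lets frequent semantics occupy more prototypes than rare ones, matching the long-tailed label statistics of outdoor point clouds.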
Similar Papers
Sonata: Self-Supervised Learning of Reliable Point Representations
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.
Semantic Concentration for Self-Supervised Dense Representations Learning
CV and Pattern Recognition
Teaches computers to understand tiny picture parts better.
PointDico: Contrastive 3D Representation Learning Guided by Diffusion Models
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.