Sonata: Self-Supervised Learning of Reliable Point Representations
By: Xiaoyang Wu, Daniel DeTone, Duncan Frost, and more
Potential Business Impact:
Teaches computers to understand 3D scenes and shapes using far less labeled data and computation.
In this paper, we question whether we have a reliable self-supervised point cloud model that can be used for diverse 3D tasks via simple linear probing, even with limited data and minimal computation. We find that existing 3D self-supervised learning approaches fall short when evaluated on representation quality through linear probing. We hypothesize that this is due to what we term the "geometric shortcut", which causes representations to collapse to low-level spatial features. This challenge is unique to 3D and arises from the sparse nature of point cloud data. We address it through two key strategies: obscuring spatial information and enhancing the reliance on input features, ultimately composing a Sonata of 140k point clouds through self-distillation. Sonata is simple and intuitive, yet its learned representations are strong and reliable: zero-shot visualizations demonstrate semantic grouping, alongside strong spatial reasoning through nearest-neighbor relationships. Sonata demonstrates exceptional parameter and data efficiency, tripling linear probing accuracy (from 21.8% to 72.5%) on ScanNet and nearly doubling performance with only 1% of the data compared to previous approaches. Full fine-tuning further advances SOTA across both 3D indoor and outdoor perception tasks.
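To make the evaluation protocol in the abstract concrete, the sketch below illustrates linear probing: a pretrained point encoder is frozen and only a single linear classifier is trained on its per-point features, so the score reflects representation quality rather than fine-tuning capacity. This is a minimal, hedged illustration assuming a generic PyTorch setup; the encoder, data loader, tensor shapes, and hyperparameters are placeholders, not the paper's actual code or API.

    # Minimal linear-probing sketch (assumed PyTorch interfaces, not the paper's code).
    import torch
    import torch.nn as nn

    def linear_probe(encoder: nn.Module,
                     loader,                  # assumed to yield (points, labels) batches
                     feat_dim: int,           # dimensionality of per-point features
                     num_classes: int,
                     epochs: int = 20,
                     lr: float = 1e-3,
                     device: str = "cuda"):
        """Train only a linear head on top of frozen per-point features."""
        encoder = encoder.to(device).eval()
        for p in encoder.parameters():        # freeze the pretrained backbone
            p.requires_grad_(False)

        head = nn.Linear(feat_dim, num_classes).to(device)
        opt = torch.optim.AdamW(head.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss(ignore_index=-1)   # -1 = unlabeled points

        for _ in range(epochs):
            for points, labels in loader:     # points: (N, C) float, labels: (N,) long
                points, labels = points.to(device), labels.to(device)
                with torch.no_grad():         # representations stay fixed
                    feats = encoder(points)   # (N, feat_dim) per-point features
                logits = head(feats)
                loss = criterion(logits, labels)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return head

Because the backbone never updates, a high probing accuracy (such as the 72.5% on ScanNet reported above) indicates that the self-supervised features themselves already encode semantics, which is the reliability property the paper targets.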
Similar Papers
Concerto: Joint 2D-3D Self-Supervised Learning Emerges Spatial Representations
CV and Pattern Recognition
Teaches computers to understand 3D spaces like humans.
PSA-SSL: Pose and Size-aware Self-Supervised Learning on LiDAR Point Clouds
CV and Pattern Recognition
Teaches cars to see and understand 3D shapes.
Self-Supervised Moving Object Segmentation of Sparse and Noisy Radar Point Clouds
CV and Pattern Recognition
Helps self-driving cars see moving things faster.