PointDico: Contrastive 3D Representation Learning Guided by Diffusion Models
By: Pengbo Li, Yiding Sun, Haozhe Cheng
Self-supervised representation learning has driven significant progress in Natural Language Processing and 2D Computer Vision. However, existing methods struggle to represent 3D data because of its unordered structure and uneven density. Through an in-depth analysis of mainstream contrastive and generative approaches, we find that contrastive models tend to suffer from overfitting, while 3D Masked Autoencoders struggle to handle unordered point clouds. This motivates us to learn 3D representations by combining the merits of diffusion and contrastive models, which is non-trivial due to the pattern differences between the two paradigms. In this paper, we propose \textit{PointDico}, a novel model that seamlessly integrates the two. \textit{PointDico} learns from both denoising generative modeling and cross-modal contrastive learning through knowledge distillation, where the diffusion model serves as a guide for the contrastive model. We introduce a hierarchical pyramid conditional generator for multi-scale geometric feature extraction and employ a dual-channel design to effectively integrate local and global contextual information. \textit{PointDico} achieves new state-of-the-art results in 3D representation learning, \textit{e.g.}, \textbf{94.32\%} accuracy on ScanObjectNN and \textbf{86.5\%} instance mIoU on ShapeNetPart.
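To make the training scheme concrete, below is a minimal sketch of how the three objectives described above might be combined: a standard epsilon-prediction denoising loss for the diffusion branch, an InfoNCE loss for cross-modal contrast, and an MSE distillation term in which the diffusion branch guides the contrastive branch. All function names, module roles, and loss weights here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def denoising_loss(eps_pred, eps):
    # Standard DDPM-style epsilon-prediction objective for the diffusion branch.
    return F.mse_loss(eps_pred, eps)

def info_nce(z_point, z_other, temperature=0.07):
    # Cross-modal InfoNCE: matched point-cloud / other-modality pairs in the
    # batch are positives; all other pairings serve as negatives.
    z_point = F.normalize(z_point, dim=-1)
    z_other = F.normalize(z_other, dim=-1)
    logits = z_point @ z_other.t() / temperature
    targets = torch.arange(z_point.size(0), device=z_point.device)
    return F.cross_entropy(logits, targets)

def distillation_loss(student_feat, teacher_feat):
    # The diffusion branch acts as teacher: align the contrastive (student)
    # features with the detached diffusion features.
    return F.mse_loss(student_feat, teacher_feat.detach())

def pointdico_objective(eps_pred, eps, z_point, z_img, f_student, f_teacher,
                        w_contrast=1.0, w_distill=0.5):
    # Hypothetical weighting; the paper's actual coefficients may differ.
    return (denoising_loss(eps_pred, eps)
            + w_contrast * info_nce(z_point, z_img)
            + w_distill * distillation_loss(f_student, f_teacher))

In this reading, the denoising term preserves the generative model's robustness to unordered, unevenly dense points, while the distillation term transfers that structure to the contrastive branch, mitigating its tendency to overfit.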