DINeMo: Learning Neural Mesh Models with no 3D Annotations
By: Weijie Guo, Guofeng Zhang, Wufei Ma, and more
Potential Business Impact:
Teaches robots to see objects in 3D without special labels.
Category-level 3D/6D pose estimation is a crucial step towards comprehensive 3D scene understanding, which would enable a broad range of applications in robotics and embodied AI. Recent works explored neural mesh models that approach a range of 2D and 3D tasks from an analysis-by-synthesis perspective. Despite their largely enhanced robustness to partial occlusion and domain shifts, these methods depended heavily on 3D annotations for part-contrastive learning, which confines them to a narrow set of categories and hinders efficient scaling. In this work, we present DINeMo, a novel neural mesh model that is trained with no 3D annotations by leveraging pseudo-correspondence obtained from large visual foundation models. We adopt a bidirectional pseudo-correspondence generation method, which produces pseudo-correspondences by utilizing both local appearance features and global context information. Experimental results on car datasets demonstrate that our DINeMo outperforms previous zero- and few-shot 3D pose estimation methods by a wide margin, narrowing the gap with fully-supervised methods by 67.3%. Our DINeMo also scales effectively and efficiently when incorporating more unlabeled images during training, demonstrating its advantages over supervised learning methods that rely on 3D annotations. Our project page is available at https://analysis-by-synthesis.github.io/DINeMo/.
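The bidirectional matching idea can be illustrated with a minimal sketch. The snippet below is not DINeMo's actual pipeline: it shows only the generic mutual-nearest-neighbor (cycle-consistency) filter that a bidirectional correspondence check implies, assuming patch and vertex features have already been extracted by some visual foundation model; the paper's global-context step is not modeled here.

```python
import numpy as np

def bidirectional_pseudo_correspondence(img_feats, vertex_feats):
    """Keep only mutual nearest-neighbor matches between image patch
    features and mesh vertex features (a cycle-consistency filter).

    img_feats:    (num_patches, d) array of image patch features
    vertex_feats: (num_vertices, d) array of mesh vertex features
    Returns a list of (patch_index, vertex_index) pairs.
    """
    # Cosine similarity between every image patch and every vertex.
    a = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    b = vertex_feats / np.linalg.norm(vertex_feats, axis=1, keepdims=True)
    sim = a @ b.T  # shape: (num_patches, num_vertices)

    img_to_vert = sim.argmax(axis=1)  # best vertex for each patch
    vert_to_img = sim.argmax(axis=0)  # best patch for each vertex

    # A pair (i, j) survives only if the match agrees in both
    # directions, i.e. j is i's best vertex AND i is j's best patch.
    return [(i, j) for i, j in enumerate(img_to_vert)
            if vert_to_img[j] == i]
```

One-directional argmax matching would assign every image patch a vertex, including patches on background or occluders; requiring agreement in both directions is a standard way to suppress such spurious matches before using them as pseudo-labels.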
Similar Papers
Multi-Modal 3D Mesh Reconstruction from Images and Text
CV and Pattern Recognition
Builds 3D shapes from a few pictures.
MonoDiff9D: Monocular Category-Level 9D Object Pose Estimation via Diffusion Model
CV and Pattern Recognition
Helps robots see objects without knowing their exact shape.
Universal Features Guided Zero-Shot Category-Level Object Pose Estimation
CV and Pattern Recognition
Teaches robots to grab new things they've never seen.