MonoDiff9D: Monocular Category-Level 9D Object Pose Estimation via Diffusion Model
By: Jian Liu, Wei Sun, Hui Yang, and more
Potential Business Impact:
Helps robots see objects without knowing their exact shape.
Object pose estimation is a core capability for robots to understand and interact with their environment. For this task, monocular category-level methods are attractive because they require only a single RGB camera. However, current methods rely on shape priors or CAD models of intra-class known objects. We propose a diffusion-based monocular category-level 9D object pose generation method, MonoDiff9D. Our motivation is to leverage the probabilistic nature of diffusion models to remove the need for shape priors, CAD models, or depth sensors when estimating the pose of intra-class unknown objects. We first estimate coarse depth via DINOv2 from the monocular image in a zero-shot manner and convert it into a point cloud. We then fuse the global features of the point cloud with those of the input image and use the fused features, along with the encoded time step, to condition MonoDiff9D. Finally, we design a transformer-based denoiser to recover the object pose from Gaussian noise. Extensive experiments on two popular benchmark datasets show that MonoDiff9D achieves state-of-the-art monocular category-level 9D object pose estimation accuracy without the need for shape priors or CAD models at any stage. Our code will be made public at https://github.com/CNJianLiu/MonoDiff9D.
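The reverse-diffusion idea in the abstract (start from Gaussian noise and iteratively denoise a 9D pose, conditioned on fused image/point-cloud features and the time step) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: `denoiser` is a dummy stand-in for the transformer-based denoiser, `fuse_features` is a toy concatenation standing in for the paper's feature fusion, and the schedule parameters are generic DDPM choices assumed for illustration.

```python
import numpy as np

T = 50                              # assumed number of diffusion time steps
betas = np.linspace(1e-4, 0.02, T)  # generic linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

rng = np.random.default_rng(0)

def denoiser(x_t, t, cond):
    """Dummy stand-in for the transformer denoiser: predicts the noise in
    the current pose estimate x_t. A real model would attend over the
    conditioning features and an encoded time step; this placeholder just
    returns a deterministic value so the sampling loop runs end to end."""
    return 0.1 * np.tanh(x_t)  # shape (9,)

def fuse_features(img_feat, pc_feat):
    """Toy fusion of global image and point-cloud features (concatenation)."""
    return np.concatenate([img_feat, pc_feat])

def sample_pose(cond):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise
    a 9D pose vector (e.g. 3 rotation + 3 translation + 3 size parameters)."""
    x = rng.standard_normal(9)  # pose initialized as pure noise
    for t in reversed(range(T)):
        eps = denoiser(x, t, cond)
        # Standard DDPM posterior-mean update.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])
        if t > 0:  # inject noise at every step except the last
            x = x + np.sqrt(betas[t]) * rng.standard_normal(9)
    return x

img_feat = rng.standard_normal(16)  # placeholder global image features
pc_feat = rng.standard_normal(16)   # placeholder point-cloud features
pose = sample_pose(fuse_features(img_feat, pc_feat))
print(pose.shape)  # (9,)
```

In the actual method, the point cloud is lifted from DINOv2's zero-shot depth estimate rather than sampled at random, and the denoiser is learned; this sketch only shows how the conditioned sampling loop is structured.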
Similar Papers
Diff9D: Diffusion-Based Domain-Generalized Category-Level 9-DoF Object Pose Estimation
CV and Pattern Recognition
Robots learn to grab objects from computer pictures.
MonoSE(3)-Diffusion: A Monocular SE(3) Diffusion Framework for Robust Camera-to-Robot Pose Estimation
CV and Pattern Recognition
Helps robots see and move accurately.
Category-Level 6D Object Pose Estimation in Agricultural Settings Using a Lattice-Deformation Framework and Diffusion-Augmented Synthetic Data
CV and Pattern Recognition
Helps robots pick any fruit, even weird ones.