Co-op: Correspondence-based Novel Object Pose Estimation
By: Sungphill Moon, Hyeontae Son, Dongcheol Hur, and more
Potential Business Impact:
Helps robots see and grab objects they've never seen.
We propose Co-op, a novel method for accurately and robustly estimating the 6DoF pose of objects unseen during training from a single RGB image. Our method requires only the CAD model of the target object and can precisely estimate its pose without any additional fine-tuning. While existing model-based methods are inefficient because they rely on a large number of templates, our method enables fast and accurate estimation with a small number of templates. This improvement is achieved by finding semi-dense correspondences between the input image and the pre-rendered templates. Our method achieves strong generalization by leveraging a hybrid representation that combines patch-level classification and offset regression. Additionally, our pose refinement model estimates probabilistic flow between the input image and the rendered image, refining the initial estimate to an accurate pose using a differentiable PnP layer. We demonstrate that our method not only estimates object poses rapidly but also outperforms existing methods by a large margin on the seven core datasets of the BOP Challenge, achieving state-of-the-art accuracy.
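To make the correspondence-to-pose step concrete, the sketch below illustrates the general idea of turning matches against a pre-rendered template into a 6DoF pose. It is not the paper's implementation: Co-op uses learned semi-dense correspondences, probabilistic flow, and a differentiable PnP layer, whereas this sketch substitutes a classical RANSAC PnP solver from OpenCV, and all function and variable names here are hypothetical.

```python
# Minimal sketch (assumed pipeline, not Co-op's actual code):
# 2D-2D matches between the query image and a rendered template, together with
# the template's known depth and pose, yield 2D-3D correspondences that a PnP
# solver converts into a 6DoF pose for the query image.
import numpy as np
import cv2


def backproject(uv, depth, K, T_model_to_template_cam):
    """Lift template pixels (u, v) with depth into 3D points in the model frame."""
    ones = np.ones((uv.shape[0], 1))
    rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T      # normalized camera rays
    pts_cam = rays * depth[:, None]                             # 3D points in template camera frame
    T_cam_to_model = np.linalg.inv(T_model_to_template_cam)     # invert model -> camera transform
    pts_h = np.hstack([pts_cam, ones])
    return (T_cam_to_model @ pts_h.T).T[:, :3]


def pose_from_correspondences(query_uv, template_uv, template_depth, K,
                              T_model_to_template_cam):
    """Estimate the query-image pose from matches to a single template.

    query_uv, template_uv : (N, 2) matched pixel coordinates
    template_depth        : (N,) z-depth at the template pixels
    K                     : (3, 3) camera intrinsics
    """
    model_pts = backproject(template_uv, template_depth, K, T_model_to_template_cam)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_pts.astype(np.float32),
        query_uv.astype(np.float32),
        K.astype(np.float32),
        distCoeffs=None,
        flags=cv2.SOLVEPNP_EPNP,
    )
    if not ok:
        raise RuntimeError("PnP failed on the given correspondences")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec.ravel(), inliers
```

In the paper's setting, the correspondences themselves come from the hybrid patch-classification-plus-offset network and are refined with probabilistic flow, and the PnP step is differentiable so the whole refinement can be trained end to end; the classical solver above only stands in for that final geometric stage.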
Similar Papers
PicoPose: Progressive Pixel-to-Pixel Correspondence Learning for Novel Object Pose Estimation
CV and Pattern Recognition
Helps robots find and grab new things.
One2Any: One-Reference 6D Pose Estimation for Any Object
CV and Pattern Recognition
Lets robots see objects from any angle.
RefPose: Leveraging Reference Geometric Correspondences for Accurate 6D Pose Estimation of Unseen Objects
CV and Pattern Recognition
Helps robots find and grab new things.