RCGNet: RGB-based Category-Level 6D Object Pose Estimation with Geometric Guidance
By: Sheng Yu, Di-Hua Zhai, Yuanqing Xia
Potential Business Impact:
Lets computers estimate an object's 3D position and orientation from ordinary pictures.
While most current RGB-D-based category-level object pose estimation methods achieve strong performance, they face significant challenges in scenes lacking depth information. In this paper, we propose a novel category-level object pose estimation approach that relies solely on RGB images, enabling accurate pose estimation in real-world scenarios where depth data are unavailable. Specifically, we design a transformer-based neural network for category-level pose estimation, in which the transformer predicts and fuses the geometric features of the target object. To ensure that these predicted features faithfully capture the object's geometry, we introduce a geometric feature-guided algorithm that strengthens the network's ability to represent geometric information. Finally, we use the RANSAC-PnP algorithm to compute the object's pose, which addresses the difficulty of variable object scale in pose estimation. Experiments on benchmark datasets show that our approach is not only efficient but also more accurate than previous RGB-based methods. These results offer a new perspective for advancing category-level object pose estimation from RGB images.
Similar Papers
Unified Category-Level Object Detection and Pose Estimation from RGB Images using 3D Prototypes
CV and Pattern Recognition
Lets computers see objects in 3D from photos.
Beyond 'Templates': Category-Agnostic Object Pose, Size, and Shape Estimation from a Single View
CV and Pattern Recognition
Helps robots understand and grab any object.
CAP-Net: A Unified Network for 6D Pose and Size Estimation of Categorical Articulated Parts from a Single RGB-D Image
CV and Pattern Recognition
Helps robots grab and move objects better.