DKPMV: Dense Keypoints Fusion from Multi-View RGB Frames for 6D Pose Estimation of Textureless Objects
By: Jiahong Chen, Jinghao Wang, Zi Wang, and others
Potential Business Impact:
Helps robots see and grab objects better.
6D pose estimation of textureless objects is valuable for industrial robotic applications, yet remains challenging because depth information is frequently lost on such surfaces. Current multi-view methods either rely on depth data or insufficiently exploit multi-view geometric cues, limiting their performance. In this paper, we propose DKPMV, a pipeline that achieves dense keypoint-level fusion using only multi-view RGB images as input. We design a three-stage progressive pose optimization strategy that leverages dense multi-view keypoint geometry. To enable effective dense keypoint fusion, we enhance the keypoint network with attentional aggregation and symmetry-aware training, improving prediction accuracy and resolving ambiguities on symmetric objects. Extensive experiments on the ROBI dataset demonstrate that DKPMV outperforms state-of-the-art multi-view RGB approaches and even surpasses RGB-D methods in most cases. The code will be available soon.
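The abstract does not detail the three-stage optimization, but the core multi-view idea, fusing 2D keypoint predictions from several calibrated RGB views into 3D geometry without a depth sensor, can be sketched with classic Direct Linear Transform (DLT) triangulation. This is a generic illustration under assumed camera intrinsics and poses, not the authors' implementation:

```python
import numpy as np

def triangulate_dlt(proj_mats, pts_2d):
    """Triangulate one 3D point from its 2D keypoint observations
    in multiple calibrated views via the Direct Linear Transform."""
    A = []
    for P, (u, v) in zip(proj_mats, pts_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: u*(P[2]@X) = P[0]@X, similarly for v.
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    # The solution is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]
    return X[:3] / X[3]

# Synthetic setup: one keypoint at (0.1, -0.2, 2.0) seen from two
# cameras (illustrative intrinsics; second camera shifted along x).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
X_true = np.array([0.1, -0.2, 2.0, 1.0])

def make_view(R, t):
    P = K @ np.hstack([R, t.reshape(3, 1)])
    x = P @ X_true
    return P, x[:2] / x[2]          # projection matrix, pixel coords

P1, x1 = make_view(np.eye(3), np.zeros(3))
P2, x2 = make_view(np.eye(3), np.array([-0.3, 0.0, 0.0]))

X_est = triangulate_dlt([P1, P2], [x1, x2])
print(np.round(X_est, 3))  # recovers [ 0.1 -0.2  2. ]
```

Repeating this for every predicted keypoint yields a dense 3D point set in the world frame, to which an object model can then be registered (e.g. via PnP or least-squares alignment) to obtain the 6D pose; in practice the per-view keypoint predictions are noisy, which is why fusion quality and symmetry handling matter.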
Similar Papers
Active 6D Pose Estimation for Textureless Objects using Multi-View RGB Frames
CV and Pattern Recognition
Helps robots find objects even when they look the same.
AlignPose: Generalizable 6D Pose Estimation via Multi-view Feature-metric Alignment
CV and Pattern Recognition
Helps robots see objects from many angles.
KV-Tracker: Real-Time Pose Tracking with Transformers
CV and Pattern Recognition
Makes cameras see and remember places faster.