Any6D: Model-free 6D Pose Estimation of Novel Objects
By: Taeyeop Lee, Bowen Wen, Minjun Kang, and more
Potential Business Impact:
Lets robots find and grasp objects they have never seen before, from a single reference image.
We introduce Any6D, a model-free framework for 6D object pose estimation that requires only a single RGB-D anchor image to estimate both the 6D pose and size of unknown objects in novel scenes. Unlike existing methods that rely on textured 3D models or multiple viewpoints, Any6D leverages a joint object alignment process to enhance 2D-3D alignment and metric scale estimation for improved pose accuracy. Our approach integrates a render-and-compare strategy to generate and refine pose hypotheses, enabling robust performance in scenarios with occlusions, non-overlapping views, diverse lighting conditions, and large cross-environment variations. We evaluate our method on five challenging datasets: REAL275, Toyota-Light, HO3D, YCBINEOAT, and LM-O, demonstrating its effectiveness in significantly outperforming state-of-the-art methods for novel object pose estimation. Project page: https://taeyeop.com/any6d
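The abstract mentions a render-and-compare strategy for generating and scoring pose hypotheses. The sketch below is a minimal, hypothetical illustration of that general idea, not Any6D's actual pipeline: it assumes a point-cloud object representation, a toy point-splat depth renderer, and a simple depth-residual score, and the helper names (`render_depth`, `score_hypothesis`, `render_and_compare`) are invented for illustration.

```python
# Minimal render-and-compare sketch (illustrative only; not the authors' code).
# Assumptions: pinhole camera intrinsics K, object given as a 3D point cloud,
# observed depth map from an RGB-D sensor, candidate 4x4 poses to score.
import numpy as np

def project_points(points, pose, K):
    """Transform object points by a 4x4 pose and project with intrinsics K."""
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    cam_pts = (pose @ pts_h.T).T[:, :3]          # points in camera frame
    uv = (K @ cam_pts.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective divide
    return uv, cam_pts[:, 2]                     # pixel coords, depths

def render_depth(points, pose, K, hw):
    """Splat projected points into a sparse depth map (toy renderer)."""
    h, w = hw
    depth = np.full((h, w), np.inf)
    uv, z = project_points(points, pose, K)
    u, v = np.round(uv[:, 0]).astype(int), np.round(uv[:, 1]).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (z > 0)
    for ui, vi, zi in zip(u[valid], v[valid], z[valid]):
        depth[vi, ui] = min(depth[vi, ui], zi)   # keep nearest surface
    return depth

def score_hypothesis(obs_depth, rend_depth):
    """Lower is better: mean depth residual where both maps are valid."""
    mask = np.isfinite(rend_depth) & (obs_depth > 0)
    if mask.sum() == 0:
        return np.inf
    return np.abs(obs_depth[mask] - rend_depth[mask]).mean()

def render_and_compare(points, obs_depth, K, pose_hypotheses):
    """Return the hypothesis whose rendering best matches the observed depth."""
    scores = [score_hypothesis(obs_depth,
                               render_depth(points, T, K, obs_depth.shape))
              for T in pose_hypotheses]
    best = int(np.argmin(scores))
    return pose_hypotheses[best], scores[best]
```

In a real system, such a coarse hypothesis search would be followed by iterative refinement, in line with the abstract's "generate and refine pose hypotheses" description.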
Similar Papers
One2Any: One-Reference 6D Pose Estimation for Any Object
CV and Pattern Recognition
Lets robots see objects from any angle.
Novel Object 6D Pose Estimation with a Single Reference View
CV and Pattern Recognition
Lets robots find objects with just one picture.
UA-Pose: Uncertainty-Aware 6D Object Pose Estimation and Online Object Completion with Partial References
CV and Pattern Recognition
Helps robots find objects with only a few pictures.