Beyond 'Templates': Category-Agnostic Object Pose, Size, and Shape Estimation from a Single View
By: Jinyu Zhang, Haitao Lin, Jiashu Hou, and others
Potential Business Impact:
Helps robots understand and grab any object.
Estimating an object's 6D pose, size, and shape from visual input is a fundamental problem in computer vision, with critical applications in robotic grasping and manipulation. Existing methods either rely on object-specific priors such as CAD models or templates, or suffer from limited generalization across categories due to pose-shape entanglement and multi-stage pipelines. In this work, we propose a unified, category-agnostic framework that simultaneously predicts 6D pose, size, and dense shape from a single RGB-D image, without requiring templates, CAD models, or category labels at test time. Our model fuses dense 2D features from vision foundation models with partial 3D point clouds using a Transformer encoder enhanced by a Mixture-of-Experts, and employs parallel decoders for pose-size estimation and shape reconstruction, achieving real-time inference at 28 FPS. Trained solely on synthetic data from 149 categories in the SOPE dataset, our framework is evaluated on four diverse benchmarks (SOPE, ROPE, ObjaversePose, and HANDAL) spanning over 300 categories. It achieves state-of-the-art accuracy on seen categories while demonstrating remarkably strong zero-shot generalization to unseen real-world objects, establishing a new standard for open-set 6D understanding in robotics and embodied AI.
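The pipeline described above (fuse per-point 2D features with 3D points, route tokens through a Mixture-of-Experts, then decode pose-size and shape in parallel heads) can be sketched with placeholder linear layers. All dimensions, the expert count, and the 9-value pose-size parameterization below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: N observed points, D2-dim 2D features, H hidden width
N, D2, D3, H = 1024, 64, 3, 128

feat_2d = rng.standard_normal((N, D2))   # dense features from a vision foundation model
points  = rng.standard_normal((N, D3))   # partial point cloud (back-projected depth)

# Per-point fusion: concatenate 2D features with xyz, project to hidden size
W_fuse = rng.standard_normal((D2 + D3, H)) * 0.02
tokens = np.concatenate([feat_2d, points], axis=1) @ W_fuse    # (N, H)

# Mixture-of-Experts: a softmax gate mixes the outputs of E expert layers
E = 4
W_gate = rng.standard_normal((H, E)) * 0.02
W_exp  = rng.standard_normal((E, H, H)) * 0.02
gates = np.exp(tokens @ W_gate)
gates /= gates.sum(axis=1, keepdims=True)                      # (N, E) routing weights
expert_out = np.einsum('nh,ehk->nek', tokens, W_exp)           # each expert transforms every token
tokens = np.einsum('ne,nek->nk', gates, expert_out)            # gate-weighted expert mix

# Parallel decoders: a pooled global feature feeds two independent heads
g = tokens.mean(axis=0)                                        # (H,)
pose_size = g @ (rng.standard_normal((H, 9)) * 0.02)           # 3 rotation + 3 translation + 3 size
shape = (g @ (rng.standard_normal((H, 3 * 512)) * 0.02)).reshape(512, 3)  # dense shape points
```

The two heads share the encoder but not each other's weights, which is what lets pose-size estimation and shape reconstruction run in parallel rather than as a multi-stage pipeline.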
Similar Papers
Unified Category-Level Object Detection and Pose Estimation from RGB Images using 3D Prototypes
CV and Pattern Recognition
Lets computers see objects in 3D from photos.
RCGNet: RGB-based Category-Level 6D Object Pose Estimation with Geometric Guidance
CV and Pattern Recognition
Lets computers guess object position from pictures.
Box6D: Zero-shot Category-level 6D Pose Estimation of Warehouse Boxes
CV and Pattern Recognition
Helps robots find and grab boxes faster.