OPFormer: Object Pose Estimation leveraging foundation model with geometric encoding

Published: November 16, 2025 | arXiv ID: 2511.12614v1

By: Artem Moroz, Vít Zeman, Martin Mikšík and more

Potential Business Impact:

Helps robots detect objects and estimate their 6D poses so they can grasp them reliably.

Business Areas:
Image Recognition Data and Analytics, Software

We introduce a unified, end-to-end framework that seamlessly integrates object detection and pose estimation with a versatile onboarding process. Our pipeline begins with an onboarding stage that generates object representations from either traditional 3D CAD models or, in their absence, by rapidly reconstructing a high-fidelity neural representation (NeRF) from multi-view images. Given a test image, our system first employs the CNOS detector to localize target objects. For each detection, our novel pose estimation module, OPFormer, infers the precise 6D pose. The core of OPFormer is a transformer-based architecture that leverages a foundation model for robust feature extraction. It uniquely learns a comprehensive object representation by jointly encoding multiple template views and enriches these features with explicit 3D geometric priors using Normalized Object Coordinate Space (NOCS). A decoder then establishes robust 2D-3D correspondences to determine the final pose. Evaluated on the challenging BOP benchmarks, our integrated system demonstrates a strong balance between accuracy and efficiency, showcasing its practical applicability in both model-based and model-free scenarios.
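The abstract mentions enriching template features with explicit 3D geometric priors via Normalized Object Coordinate Space (NOCS). As a minimal illustrative sketch (not the paper's implementation), NOCS maps an object's model vertices into a canonical unit cube so that every point gets a coordinate that is independent of the object's scale and placement. The function below assumes the common convention of centering on the bounding-box center, scaling by the bounding-box diagonal, and shifting coordinates into [0, 1]:

```python
import numpy as np

def nocs_encode(vertices: np.ndarray) -> np.ndarray:
    """Map 3D model vertices into Normalized Object Coordinate Space.

    Sketch of one common NOCS convention (an assumption here, not the
    paper's exact formulation): translate so the bounding-box center is
    at the origin, scale by the bounding-box diagonal so the object fits
    in a unit cube, then shift by +0.5 so all coordinates lie in [0, 1].
    """
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    center = (vmin + vmax) / 2.0          # bounding-box center
    diagonal = np.linalg.norm(vmax - vmin)  # bounding-box diagonal length
    return (vertices - center) / diagonal + 0.5

# Toy example: points on an axis-aligned box from (0,0,0) to (2,4,4).
verts = np.array([[0.0, 0.0, 0.0],
                  [2.0, 4.0, 4.0],
                  [1.0, 2.0, 2.0]])
nocs = nocs_encode(verts)  # all values in [0, 1]; the box center maps to 0.5
```

Because each model point now has a canonical 3D coordinate, predicting a NOCS value per pixel yields the 2D-3D correspondences from which a decoder (or a PnP solver) can recover the 6D pose.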

Page Count
24 pages

Category
Computer Science:
CV and Pattern Recognition