Object Pose Distribution Estimation for Determining Revolution and Reflection Uncertainty in Point Clouds
By: Frederik Hagelskjær, Dimitrios Arapis, Steffen Madsen, and more
Potential Business Impact:
Helps robots see objects even without color.
Object pose estimation is crucial to robotic perception and typically provides a single-pose estimate. However, a single estimate cannot capture pose uncertainty arising from visual ambiguity, which can lead to unreliable behavior. Existing pose distribution methods rely heavily on color information, which is often unavailable in industrial settings. We propose a novel neural network-based method for estimating object pose uncertainty using only 3D colorless data. To the best of our knowledge, this is the first approach that leverages deep learning for pose distribution estimation without relying on RGB input. We validate our method in a real-world bin-picking scenario with objects of varying geometric ambiguity. Our current implementation focuses on symmetries in reflection and revolution, but the framework is extendable to full SE(3) pose distribution estimation. Source code is available at opde3d.github.io.
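To illustrate the revolution ambiguity the abstract refers to, here is a minimal sketch (not the paper's method, just NumPy): a point cloud that is rotationally symmetric about an axis produces the same colorless observation under any rotation about that axis, so any pose in that one-parameter family is equally valid and only a distribution over the revolution angle captures the uncertainty. The object (a sampled circle) and the 45° offset are illustrative choices.

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical object: points sampled every 1 degree on a unit circle in the
# xy-plane, so it is symmetric under revolution about the z-axis.
angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
cloud = np.stack([np.cos(angles), np.sin(angles), np.zeros_like(angles)], axis=1)

# Two candidate poses differing only by a rotation about the symmetry axis.
base_pose = rot_z(0.3)
ambiguous_pose = base_pose @ rot_z(np.pi / 4)  # 45-degree revolution offset

def as_point_set(points, decimals=6):
    """Round coordinates so the unordered point sets can be compared."""
    return {tuple(np.round(p, decimals)) for p in points}

# Both poses yield the same observed point set: colorless 3D data alone
# cannot distinguish them, so a single-pose estimator must pick arbitrarily,
# while a distribution over the revolution angle represents all of them.
same = as_point_set(cloud @ base_pose.T) == as_point_set(cloud @ ambiguous_pose.T)
print(same)  # True
```

Reflection ambiguity is analogous: a mirror-symmetric object makes a pose and its reflected counterpart indistinguishable, giving a discrete two-mode distribution rather than a continuous one.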
Similar Papers
SE(3)-PoseFlow: Estimating 6D Pose Distributions for Uncertainty-Aware Robotic Manipulation
CV and Pattern Recognition
Helps robots know exactly where objects are.
Uncertainty Quantification for Visual Object Pose Estimation
Robotics
Tells robots how sure they are about where things are.
Unified Category-Level Object Detection and Pose Estimation from RGB Images using 3D Prototypes
CV and Pattern Recognition
Lets computers see objects in 3D from photos.