ProbeMDE: Uncertainty-Guided Active Proprioception for Monocular Depth Estimation in Surgical Robotics
By: Britton Jordan, Jordan Thompson, Jesse F. d'Almeida, and more
Monocular depth estimation (MDE) provides a useful tool for robotic perception, but its predictions are often uncertain and inaccurate in challenging environments such as surgical scenes, where textureless surfaces, specular reflections, and occlusions are common. To address this, we propose ProbeMDE, a cost-aware active sensing framework that combines RGB images with sparse proprioceptive measurements for MDE. Our approach uses an ensemble of MDE models to predict dense depth maps conditioned on both RGB images and a sparse set of known depth measurements obtained via proprioception, i.e., where the robot has touched the environment in a known configuration. We quantify predictive uncertainty via the ensemble's variance and compute the gradient of this uncertainty with respect to candidate measurement locations. To select maximally informative locations to propriocept (touch) while preventing mode collapse, we apply Stein Variational Gradient Descent (SVGD) over this gradient map. We validate our method in both simulated and physical experiments on central airway obstruction surgical phantoms. Our results demonstrate that our approach outperforms baseline methods across standard depth estimation metrics, achieving higher accuracy while minimizing the number of required proprioceptive measurements.
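The two core ideas in the abstract, ensemble variance as an uncertainty signal and SVGD to spread candidate touch points over informative regions, can be illustrated with a short sketch. This is not the authors' code: the uncertainty map below is synthetic stand-in data rather than the output of MDE networks, and the particle count, kernel bandwidth, and step size are illustrative assumptions.

```python
# Minimal sketch of (1) per-pixel uncertainty from an ensemble's variance and
# (2) SVGD over candidate proprioception (touch) locations.
# NOTE: the "ensemble" here is random stand-in data, not real MDE predictions.
import numpy as np

H, W, N_MODELS = 64, 64, 5
rng = np.random.default_rng(0)

# (1) Stand-in for the ensemble: N_MODELS dense depth maps over an HxW image.
depth_preds = rng.normal(loc=1.0, scale=0.1, size=(N_MODELS, H, W))
uncertainty = depth_preds.var(axis=0)          # per-pixel predictive variance

def grad_uncertainty(points):
    """Finite-difference gradient of the uncertainty map at (row, col) points."""
    gy, gx = np.gradient(uncertainty)
    r = np.clip(points[:, 0].astype(int), 0, H - 1)
    c = np.clip(points[:, 1].astype(int), 0, W - 1)
    return np.stack([gy[r, c], gx[r, c]], axis=1)

def rbf_kernel(x, h=5.0):
    """RBF kernel matrix and the repulsive SVGD term that keeps particles apart."""
    diff = x[:, None, :] - x[None, :, :]                 # pairwise x_i - x_j
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h ** 2))
    repulsion = (diff / h ** 2 * k[..., None]).sum(axis=1)
    return k, repulsion

# (2) SVGD: particles move uphill on uncertainty while repelling each other,
# avoiding the mode collapse that plain gradient ascent would produce.
particles = rng.uniform(0, [H, W], size=(8, 2))          # candidate touch points
step = 2.0
for _ in range(50):
    k, repulsion = rbf_kernel(particles)
    phi = (k @ grad_uncertainty(particles) + repulsion) / len(particles)
    particles = np.clip(particles + step * phi, 0, [H - 1, W - 1])

print("Selected proprioception targets (row, col):")
print(particles.round(1))
```

In the full framework, the selected targets would be touched by the robot, the resulting sparse depth measurements fed back into the ensemble, and the uncertainty map recomputed; this sketch only covers a single selection step.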
Similar Papers
Monocular absolute depth estimation from endoscopy via domain-invariant feature learning and latent consistency
CV and Pattern Recognition
Helps robot doctors see depth in surgery.
UM-Depth: Uncertainty Masked Self-Supervised Monocular Depth Estimation with Visual Odometry
CV and Pattern Recognition
Makes self-driving cars see better in tricky spots.
PROBE: Proprioceptive Obstacle Detection and Estimation while Navigating in Clutter
Robotics
Robot feels unseen obstacles to move safely.