MonoMPC: Monocular Vision Based Navigation with Learned Collision Model and Risk-Aware Model Predictive Control
By: Basant Sharma, Prajyot Jadhav, Pranjal Paul, and more
Potential Business Impact:
Robot navigates safely through messy places.
Navigating unknown environments with a single RGB camera is challenging, as the lack of depth information prevents reliable collision-checking. While some methods use estimated depth to build collision maps, we found that depth estimates from vision foundation models are too noisy for zero-shot navigation in cluttered environments. We propose an alternative approach: instead of using noisy estimated depth for direct collision-checking, we use it as a rich context input to a learned collision model. This model predicts the distribution of minimum obstacle clearance that the robot can expect for a given control sequence. At inference time, these predictions inform a risk-aware MPC planner that minimizes estimated collision risk. Our joint learning pipeline co-trains the collision model and the risk metric using both safe and unsafe trajectories. Crucially, this joint training ensures an optimal variance in the collision model, which improves navigation in highly cluttered environments. Consequently, real-world experiments show 9x and 7x improvements in success rate over NoMaD and the ROS navigation stack, respectively. Ablation studies further validate the effectiveness of our design choices.
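The abstract gives no implementation details, but the pipeline it describes (a learned model that predicts a clearance distribution per control sequence, scored by a risk metric inside a sampling-based MPC loop) has roughly the following shape. This is a minimal sketch under our own assumptions: the Gaussian clearance output, the tail-probability risk metric, and all names (`collision_model`, `goal_cost`, `risk_weight`, `d_safe`) are illustrative, not taken from the paper.

```python
import math
import numpy as np

def collision_risk(mu, sigma, d_safe=0.3):
    # Probability that minimum clearance falls below d_safe, assuming the
    # learned collision model outputs a Gaussian (mu, sigma) over clearance.
    # The Gaussian form and the d_safe value are assumptions, not the paper's.
    z = (d_safe - mu) / (sigma * math.sqrt(2.0) + 1e-9)
    return 0.5 * (1.0 + math.erf(z))

def risk_aware_mpc_step(collision_model, context, goal_cost,
                        n_samples=256, horizon=20, risk_weight=10.0):
    # Sampling-based MPC step: score random control sequences by goal
    # progress plus estimated collision risk, and return the best one.
    # Hypothetical interfaces:
    #   collision_model(context, u) -> (mu, sigma) over min obstacle clearance
    #   goal_cost(u) -> scalar progress cost for a control sequence
    rng = np.random.default_rng()
    best_u, best_cost = None, math.inf
    for _ in range(n_samples):
        u = rng.uniform(-1.0, 1.0, size=(horizon, 2))  # e.g. (v, omega) per step
        mu, sigma = collision_model(context, u)
        cost = goal_cost(u) + risk_weight * collision_risk(mu, sigma)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u  # execute the first control, then replan at the next frame
```

In this reading, the estimated depth enters only through `context` (the input to the learned collision model) rather than through an explicit collision map, which matches the paper's stated motivation for avoiding direct collision-checking on noisy depth.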
Similar Papers
Collision avoidance from monocular vision trained with novel view synthesis
Robotics
Robot sees obstacles, avoids crashing into them.
Collision-Free Navigation of Mobile Robots via Quadtree-Based Model Predictive Control
Robotics
Helps robots move safely and smartly.
Socially Aware Robot Crowd Navigation via Online Uncertainty-Driven Risk Adaptation
Robotics
Robots learn to walk safely through crowds.