Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles
By: Qiyuan Wu, Mark Campbell
Potential Business Impact:
Helps self-driving cars know how much to trust their camera-based position estimates, even in bad weather and at night.
The uncertainty quantification of sensor measurements coupled with deep learning networks is crucial for many robotics systems, especially for safety-critical applications such as self-driving cars. This paper develops an uncertainty quantification approach in the context of visual localization for autonomous driving, where locations are selected based on images. Key to our approach is learning the measurement uncertainty with a lightweight sensor error model that maps both image-feature and semantic information to a 2-dimensional error distribution. Our approach enables uncertainty estimation conditioned on the specific context of the matched image pair, implicitly capturing other critical, unannotated factors (e.g., city vs. highway, dynamic vs. static scenes, winter vs. summer) in a latent manner. We demonstrate the accuracy of our uncertainty prediction framework on the Ithaca365 dataset, which includes variations in lighting and weather (sunny, night, snowy). We evaluate both the uncertainty quantification of the sensor+network and Bayesian localization filters that use a unique sensor gating method. Results show that under poor weather and lighting conditions the measurement error does not follow a Gaussian distribution and is better predicted by our Gaussian mixture model.
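To make the abstract's core idea concrete, here is a minimal sketch, assuming a PyTorch implementation: a small network that maps an image-feature embedding together with a semantic summary vector to the parameters of a 2-D Gaussian mixture over localization error, trained with a negative log-likelihood loss. The names, dimensions, component count, and input encodings (GMMErrorHead, feat_dim, sem_dim, n_components) are illustrative assumptions, not the authors' released code.

```python
# A minimal sketch, assuming a PyTorch implementation: a lightweight error model that
# maps an image-feature embedding plus a semantic summary vector to the parameters of
# a 2-D Gaussian mixture over localization error. Dimensions, the component count, and
# the input encodings are illustrative assumptions, not the paper's released code.
import math
import torch
import torch.nn as nn

class GMMErrorHead(nn.Module):
    """Predicts weights, means, std devs, and correlations of a 2-D Gaussian mixture."""
    def __init__(self, feat_dim=256, sem_dim=32, n_components=3, hidden=128):
        super().__init__()
        self.n = n_components
        # Per component: 1 weight logit, 2 means, 2 log-std-devs, 1 correlation logit = 6 values.
        self.net = nn.Sequential(
            nn.Linear(feat_dim + sem_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_components * 6),
        )

    def forward(self, feat, sem):
        p = self.net(torch.cat([feat, sem], dim=-1)).view(-1, self.n, 6)
        weights = torch.softmax(p[..., 0], dim=-1)      # mixture weights sum to 1
        means = p[..., 1:3]                             # 2-D error means
        stds = torch.exp(p[..., 3:5]).clamp(min=1e-3)   # positive standard deviations
        corr = torch.tanh(p[..., 5])                    # correlation coefficient in (-1, 1)
        return weights, means, stds, corr

def gmm_nll(weights, means, stds, corr, err):
    """Negative log-likelihood of observed 2-D localization errors under the mixture."""
    dx = (err.unsqueeze(1) - means) / stds              # (B, K, 2) standardized residuals
    one_minus_r2 = (1.0 - corr ** 2).clamp(min=1e-6)
    quad = (dx[..., 0] ** 2 - 2 * corr * dx[..., 0] * dx[..., 1] + dx[..., 1] ** 2) / one_minus_r2
    log_norm = -torch.log(2 * math.pi * stds[..., 0] * stds[..., 1] * torch.sqrt(one_minus_r2))
    log_comp = log_norm - 0.5 * quad + torch.log(weights + 1e-12)
    return -torch.logsumexp(log_comp, dim=-1).mean()

# Usage sketch: feat/sem would come from the matched image pair's feature extractor
# and a semantic-segmentation summary; err is the observed 2-D localization error.
model = GMMErrorHead()
feat, sem, err = torch.randn(8, 256), torch.randn(8, 32), torch.randn(8, 2)
loss = gmm_nll(*model(feat, sem), err)
```

At filtering time, a predicted mixture of this form could also support measurement gating, for example by rejecting matches whose innovation has low likelihood under the mixture before the Bayesian update; the paper's specific gating rule is not reproduced here.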
Similar Papers
Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal Localization
Robotics
Helps robots know exactly where they are.
OCCUQ: Exploring Efficient Uncertainty Quantification for 3D Occupancy Prediction
CV and Pattern Recognition
Helps self-driving cars see better in bad weather.
An End-to-End Learning-Based Multi-Sensor Fusion for Autonomous Vehicle Localization
Robotics
Helps cars know exactly where they are.