Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles

Published: June 18, 2025 | arXiv ID: 2506.15851v1

By: Qiyuan Wu, Mark Campbell

Potential Business Impact:

Helps self-driving cars localize themselves more reliably in bad weather and lighting.

Business Areas:
Autonomous Vehicles, Transportation

The uncertainty quantification of sensor measurements coupled with deep learning networks is crucial for many robotics systems, especially for safety-critical applications such as self-driving cars. This paper develops an uncertainty quantification approach in the context of visual localization for autonomous driving, where locations are selected based on images. Key to our approach is to learn the measurement uncertainty using a lightweight sensor error model, which maps both image features and semantic information to a 2-dimensional error distribution. Our approach enables uncertainty estimation conditioned on the specific context of the matched image pair, implicitly capturing other critical, unannotated factors (e.g., city vs. highway, dynamic vs. static scenes, winter vs. summer) in a latent manner. We demonstrate the accuracy of our uncertainty prediction framework using the Ithaca365 dataset, which includes variations in lighting and weather (sunny, night, snowy). We evaluate both the uncertainty quantification of the sensor+network and Bayesian localization filters that use a unique sensor gating method. Results show that the measurement error does not follow a Gaussian distribution under poor weather and lighting conditions, and is better predicted by our Gaussian Mixture model.
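
To make the idea concrete, the sketch below shows one way a lightweight error model of the kind the abstract describes could look: concatenated image-feature and semantic descriptors are mapped to a 2-D Gaussian mixture over localization error, trained by minimizing the negative log-likelihood of observed errors. This is a minimal PyTorch sketch under stated assumptions; the class name ErrorModelHead, the feature/semantic dimensions, the number of mixture components, and the diagonal-covariance parameterization are illustrative choices, not the authors' published architecture.

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical, Independent, MixtureSameFamily, Normal

class ErrorModelHead(nn.Module):
    """Illustrative lightweight error model (assumed interface, not the paper's exact design):
    maps image-feature + semantic descriptors to a K-component 2-D Gaussian mixture
    over localization error."""

    def __init__(self, feat_dim: int, sem_dim: int, n_components: int = 3, hidden: int = 64):
        super().__init__()
        self.n_components = n_components
        # Per component: 1 mixture-weight logit, 2 means, 2 log-scales -> 5 values
        self.net = nn.Sequential(
            nn.Linear(feat_dim + sem_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_components * 5),
        )

    def forward(self, feat: torch.Tensor, sem: torch.Tensor) -> MixtureSameFamily:
        params = self.net(torch.cat([feat, sem], dim=-1))
        logits, mu, log_scale = params.split(
            [self.n_components, 2 * self.n_components, 2 * self.n_components], dim=-1
        )
        mu = mu.view(*mu.shape[:-1], self.n_components, 2)
        scale = log_scale.view(*log_scale.shape[:-1], self.n_components, 2).exp()
        comp = Independent(Normal(mu, scale), 1)  # diagonal 2-D Gaussians per component
        return MixtureSameFamily(Categorical(logits=logits), comp)

# Training signal: negative log-likelihood of the observed 2-D localization error.
model = ErrorModelHead(feat_dim=128, sem_dim=16)          # dimensions are placeholders
feat, sem = torch.randn(8, 128), torch.randn(8, 16)       # hypothetical descriptors
err = torch.randn(8, 2)                                    # observed (x, y) error samples
loss = -model(feat, sem).log_prob(err).mean()
loss.backward()
```

The mixture output is what allows the predicted error to be non-Gaussian in degraded conditions, which is the behavior the abstract reports on the Ithaca365 data; a single-Gaussian head would be the K=1 special case.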

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Robotics