Localization Meets Uncertainty: Uncertainty-Aware Multi-Modal Localization
By: Hye-Min Won, Jieun Lee, Jiyong Oh
Potential Business Impact:
Helps robots judge when their location estimate can be trusted.
Reliable localization is critical for robot navigation in complex indoor environments. In this paper, we propose an uncertainty-aware localization method that enhances the reliability of localization outputs without modifying the prediction model itself. This study introduces a percentile-based rejection strategy that filters out unreliable 3-DoF pose predictions based on the aleatoric and epistemic uncertainties estimated by the network. We apply this approach to a multi-modal end-to-end localization model that fuses RGB images and 2D LiDAR data, and we evaluate it on three real-world datasets collected with a commercial serving robot. Experimental results show that applying stricter uncertainty thresholds consistently improves pose accuracy. Specifically, the mean position error is reduced by 41.0%, 56.7%, and 69.4%, and the mean orientation error by 55.6%, 65.7%, and 73.3%, when applying 90%, 80%, and 70% thresholds, respectively. Furthermore, the rejection strategy effectively removes extreme outliers, resulting in better alignment with ground-truth trajectories. To the best of our knowledge, this is the first study to quantitatively demonstrate the benefits of percentile-based uncertainty rejection in multi-modal end-to-end localization tasks. Our approach provides a practical means to enhance the reliability and accuracy of localization systems in real-world deployments.
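The percentile-based rejection described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes each 3-DoF pose prediction comes with scalar aleatoric and epistemic uncertainty estimates, combines them by simple summation (an assumption; the paper may weight or handle them separately), and keeps only predictions whose combined uncertainty falls below a chosen percentile, mirroring the 90%/80%/70% thresholds reported above. The function and variable names are hypothetical.

```python
import numpy as np

def percentile_rejection(poses, aleatoric, epistemic, keep_percentile=80.0):
    """Filter 3-DoF pose predictions with percentile-based uncertainty rejection.

    poses:           (N, 3) array of predicted poses (x, y, yaw).
    aleatoric:       (N,) per-prediction aleatoric uncertainty estimates.
    epistemic:       (N,) per-prediction epistemic uncertainty estimates.
    keep_percentile: keep predictions whose combined uncertainty lies below
                     this percentile (e.g. 70, 80, or 90 as in the paper).

    NOTE: summing the two uncertainty terms is an illustrative assumption.
    """
    combined = np.asarray(aleatoric) + np.asarray(epistemic)
    threshold = np.percentile(combined, keep_percentile)
    keep = combined <= threshold
    return poses[keep], keep

# Hypothetical usage with random stand-in data:
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    poses = rng.normal(size=(100, 3))          # stand-in pose predictions
    alea = rng.random(100)                     # stand-in aleatoric uncertainty
    epis = rng.random(100)                     # stand-in epistemic uncertainty
    kept_poses, mask = percentile_rejection(poses, alea, epis, keep_percentile=70.0)
    print(f"kept {mask.sum()} of {len(mask)} predictions")
```

Stricter (lower) percentiles discard more predictions but, per the reported results, leave the remaining poses substantially closer to ground truth.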
Similar Papers
Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles
Robotics
Helps self-driving cars see better in bad weather.
Evidential Uncertainty Estimation for Multi-Modal Trajectory Prediction
Robotics
Helps self-driving cars predict where others will go.
Towards Robust LiDAR Localization: Deep Learning-based Uncertainty Estimation
Robotics
Helps robots know where they are better.