An End-to-End Learning-Based Multi-Sensor Fusion for Autonomous Vehicle Localization
By: Changhong Lin, Jiarong Lin, Zhiqiang Sui, and more
Potential Business Impact:
Helps cars know precisely where they are.
Multi-sensor fusion is essential for autonomous vehicle localization because it integrates data from multiple sources for greater accuracy and reliability. The accuracy of the fused position and orientation depends on how precisely uncertainty is modeled. Traditional approaches assume Gaussian-distributed uncertainty and rely on manual, heuristic parameter tuning; they scale poorly and struggle with long-tail scenarios. To address these challenges, we propose a learning-based method that encodes sensor information with higher-order neural network features, eliminating explicit uncertainty estimation altogether. By designing an end-to-end neural network specifically for multi-sensor fusion, the method also removes the need for parameter fine-tuning. Experiments in real-world autonomous driving scenarios show that the proposed method outperforms existing multi-sensor fusion methods in both accuracy and robustness. A video of the results can be viewed at https://youtu.be/q4iuobMbjME.
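To make the contrast concrete: classical fusion combines sensor pose estimates with inverse-variance weights derived from a hand-tuned Gaussian uncertainty model, while the learning-based idea maps raw sensor features straight to a fused pose through a network, with no explicit uncertainty parameters to tune. The sketch below (plain NumPy, untrained random weights, all names hypothetical) illustrates both sides of that contrast; it is not the paper's actual architecture.

```python
import numpy as np

def gaussian_fusion(estimates, variances):
    """Classic inverse-variance (Kalman-style) fusion of pose estimates.
    The variances are hand-tuned heuristic parameters -- the part the
    paper's learned approach aims to eliminate."""
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    w = w / w.sum()                               # normalize to sum to 1
    return (w[:, None] * np.asarray(estimates, dtype=float)).sum(axis=0)

class LearnedFusion:
    """Toy end-to-end fusion: a tiny MLP maps concatenated per-sensor
    feature vectors directly to a fused pose, with no explicit
    uncertainty model in between (illustrative stand-in only)."""
    def __init__(self, in_dim, hidden=16, out_dim=3, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, sensor_features):
        x = np.concatenate(sensor_features)        # stack all sensor features
        h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2               # fused pose, e.g. (x, y, yaw)
```

In the Gaussian version, changing a sensor's behavior (weather, degraded GNSS, long-tail scenes) means re-tuning its variance by hand; in the learned version, the weighting is absorbed into network parameters fit from data end to end.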
Similar Papers
GaussianFusion: Gaussian-Based Multi-Sensor Fusion for End-to-End Autonomous Driving
Robotics
Helps self-driving cars see and plan better.
Deep Learning-Based Multi-Modal Fusion for Robust Robot Perception and Navigation
Machine Learning (CS)
Helps robots see and move better in tricky places.
Semantic and Feature Guided Uncertainty Quantification of Visual Localization for Autonomous Vehicles
Robotics
Helps self-driving cars see better in bad weather.