LRFusionPR: A Polar BEV-Based LiDAR-Radar Fusion Network for Place Recognition
By: Zhangshuo Qi, Luqi Cheng, Zijie Zhou, and more
Potential Business Impact:
Helps self-driving cars know where they are in bad weather.
In autonomous driving, place recognition is critical for global localization in GPS-denied environments. LiDAR- and radar-based place recognition methods have garnered increasing attention, as LiDAR provides precise ranging, whereas radar excels in adverse-weather resilience. However, effectively leveraging LiDAR-radar fusion for place recognition remains challenging. The noisy and sparse nature of radar data limits its potential to further improve recognition accuracy. In addition, heterogeneous radar configurations complicate the development of unified cross-modality fusion frameworks. In this paper, we propose LRFusionPR, which improves recognition accuracy and robustness by fusing LiDAR with either single-chip or scanning radar. Technically, a dual-branch network is proposed to fuse the different modalities within a unified polar-coordinate bird's eye view (BEV) representation. In the fusion branch, cross-attention is utilized to perform cross-modality feature interactions. The knowledge from the fusion branch is simultaneously transferred to the distillation branch, which takes radar as its only input to further improve robustness. Ultimately, the descriptors from both branches are concatenated, producing the multimodal global descriptor for place retrieval. Extensive evaluations on multiple datasets demonstrate that our LRFusionPR achieves accurate place recognition while maintaining robustness under varying weather conditions. Our open-source code will be released at https://github.com/QiZS-BIT/LRFusionPR.
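To make the described pipeline concrete, here is a minimal numpy sketch of the two-branch descriptor construction: radar polar-BEV features attend to LiDAR features via cross-attention in a fusion branch, a radar-only branch produces its own descriptor, and the two pooled descriptors are concatenated into the global descriptor. All shapes, projection weights, and the placeholder radar encoder are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q_feat, kv_feat, d):
    """Single-head cross-attention: queries from one modality, keys/values
    from the other. Projection weights are random placeholders."""
    rng = np.random.default_rng(0)
    Wq = rng.standard_normal((q_feat.shape[-1], d)) / np.sqrt(q_feat.shape[-1])
    Wk = rng.standard_normal((kv_feat.shape[-1], d)) / np.sqrt(kv_feat.shape[-1])
    Wv = rng.standard_normal((kv_feat.shape[-1], d)) / np.sqrt(kv_feat.shape[-1])
    Q, K, V = q_feat @ Wq, kv_feat @ Wk, kv_feat @ Wv
    attn = softmax(Q @ K.T / np.sqrt(d), axis=-1)
    return attn @ V

# Toy polar-BEV feature maps: N cells (range x azimuth, flattened) x C channels.
N, C, D = 64, 32, 16
rng = np.random.default_rng(1)
lidar_feat = rng.standard_normal((N, C))
radar_feat = rng.standard_normal((N, C))

# Fusion branch: radar queries attend to LiDAR features, then pool.
fused = cross_attention(radar_feat, lidar_feat, D)
fusion_desc = fused.mean(axis=0)

# Distillation branch: radar-only encoder (placeholder linear map), then pool.
# In the paper this branch is trained to mimic the fusion branch's knowledge.
radar_desc = (radar_feat @ (np.ones((C, D)) / C)).mean(axis=0)

# Global descriptor for place retrieval: concatenation of both branches.
global_desc = np.concatenate([fusion_desc, radar_desc])
print(global_desc.shape)  # (32,)
```

At retrieval time, such a descriptor would be compared against a database of descriptors (e.g. by cosine or Euclidean distance) to find the closest previously visited place.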
Similar Papers
ForestLPR: LiDAR Place Recognition in Forests Attentioning Multiple BEV Density Images
CV and Pattern Recognition
Helps robots find their way in forests.
A Pseudo Global Fusion Paradigm-Based Cross-View Network for LiDAR-Based Place Recognition
CV and Pattern Recognition
Helps cars find their way without GPS.
UniMPR: A Unified Framework for Multimodal Place Recognition with Arbitrary Sensor Configurations
CV and Pattern Recognition
Helps robots see and know where they are.