TransLocNet: Cross-Modal Attention for Aerial-Ground Vehicle Localization with Contrastive Learning
By: Phu Pham, Damon Conover, Aniket Bera
Potential Business Impact:
Helps cars figure out where they are using sky and ground views.
Aerial-ground localization is difficult due to large viewpoint and modality gaps between ground-level LiDAR and overhead imagery. We propose TransLocNet, a cross-modal attention framework that fuses LiDAR geometry with aerial semantic context. LiDAR scans are projected into a bird's-eye-view representation and aligned with aerial features through bidirectional attention, followed by a likelihood map decoder that outputs spatial probability distributions over position and orientation. A contrastive learning module enforces a shared embedding space to improve cross-modal alignment. Experiments on CARLA and KITTI show that TransLocNet outperforms state-of-the-art baselines, reducing localization error by up to 63% and achieving sub-meter, sub-degree accuracy. These results demonstrate that TransLocNet provides robust and generalizable aerial-ground localization in both synthetic and real-world settings.
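The sketch below illustrates the kind of fusion the abstract describes: flattened BEV LiDAR features and aerial-image features exchange information through bidirectional cross-attention, and a contrastive loss pulls paired aerial/ground embeddings into a shared space. This is a minimal illustration under assumed names, dimensions, and an InfoNCE-style loss; it is not the authors' released implementation.

```python
# Hypothetical sketch of cross-modal fusion: BEV LiDAR tokens attend to
# aerial tokens and vice versa, then a contrastive loss aligns the two
# modalities. Module names, dimensions, and the exact loss form are
# assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BidirectionalCrossAttention(nn.Module):
    """LiDAR-BEV tokens attend to aerial tokens and vice versa."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.bev_to_aerial = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.aerial_to_bev = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_bev = nn.LayerNorm(dim)
        self.norm_aerial = nn.LayerNorm(dim)

    def forward(self, bev_tokens, aerial_tokens):
        # bev_tokens:    (B, N_bev, dim)  flattened BEV feature map from LiDAR
        # aerial_tokens: (B, N_air, dim)  flattened aerial feature map
        bev_ctx, _ = self.bev_to_aerial(bev_tokens, aerial_tokens, aerial_tokens)
        air_ctx, _ = self.aerial_to_bev(aerial_tokens, bev_tokens, bev_tokens)
        fused_bev = self.norm_bev(bev_tokens + bev_ctx)      # residual + norm
        fused_air = self.norm_aerial(aerial_tokens + air_ctx)
        return fused_bev, fused_air


def contrastive_alignment_loss(bev_emb, air_emb, temperature: float = 0.07):
    """InfoNCE-style loss over pooled per-sample embeddings.

    bev_emb, air_emb: (B, dim) embeddings of matching scenes; the i-th
    pair is a positive, all other pairings in the batch are negatives.
    """
    bev_emb = F.normalize(bev_emb, dim=-1)
    air_emb = F.normalize(air_emb, dim=-1)
    logits = bev_emb @ air_emb.t() / temperature             # (B, B) similarities
    targets = torch.arange(bev_emb.size(0), device=bev_emb.device)
    # Symmetric loss: BEV-to-aerial and aerial-to-BEV retrieval directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    fusion = BidirectionalCrossAttention(dim=256, num_heads=8)
    bev = torch.randn(4, 1024, 256)   # e.g. a 32x32 BEV grid, flattened
    air = torch.randn(4, 1024, 256)   # e.g. a 32x32 aerial patch grid, flattened
    fused_bev, fused_air = fusion(bev, air)
    loss = contrastive_alignment_loss(fused_bev.mean(dim=1), fused_air.mean(dim=1))
    print(fused_bev.shape, fused_air.shape, loss.item())
```

In the full method as described, the fused features would then feed a likelihood map decoder that outputs spatial probability distributions over position and orientation; that decoder is omitted here.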
Similar Papers
Aerial-ground Cross-modal Localization: Dataset, Ground-truth, and Benchmark
Robotics
Helps robots find their way using 3D maps.
Aerial Vision-Language Navigation with a Unified Framework for Spatial, Temporal and Embodied Reasoning
CV and Pattern Recognition
Drones fly themselves using only cameras and words.
Fine-Grained Cross-View Localization via Local Feature Matching and Monocular Depth Priors
CV and Pattern Recognition
Finds your location from a picture.