Score: 1

TransLocNet: Cross-Modal Attention for Aerial-Ground Vehicle Localization with Contrastive Learning

Published: December 11, 2025 | arXiv ID: 2512.10419v1

By: Phu Pham, Damon Conover, Aniket Bera

Potential Business Impact:

Helps self-driving cars pinpoint where they are by matching ground-level laser scans with overhead aerial images.

Business Areas:
Autonomous Vehicles, Transportation

Aerial-ground localization is difficult due to large viewpoint and modality gaps between ground-level LiDAR and overhead imagery. We propose TransLocNet, a cross-modal attention framework that fuses LiDAR geometry with aerial semantic context. LiDAR scans are projected into a bird's-eye-view representation and aligned with aerial features through bidirectional attention, followed by a likelihood map decoder that outputs spatial probability distributions over position and orientation. A contrastive learning module enforces a shared embedding space to improve cross-modal alignment. Experiments on CARLA and KITTI show that TransLocNet outperforms state-of-the-art baselines, reducing localization error by up to 63% and achieving sub-meter, sub-degree accuracy. These results demonstrate that TransLocNet provides robust and generalizable aerial-ground localization in both synthetic and real-world settings.
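The fusion step the abstract describes, bidirectional attention between bird's-eye-view (BEV) LiDAR features and aerial image features, can be sketched as below. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the token shapes, embedding dimension, head count, and residual wiring are all illustrative choices.

```python
import torch
import torch.nn as nn


class BidirectionalCrossAttention(nn.Module):
    """Fuse BEV LiDAR tokens with aerial image tokens in both directions."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Assumed: one multi-head attention block per direction.
        self.bev_to_aerial = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.aerial_to_bev = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, bev_tokens: torch.Tensor, aerial_tokens: torch.Tensor):
        # bev_tokens:    (B, N_bev, dim) -- flattened BEV LiDAR features
        # aerial_tokens: (B, N_air, dim) -- flattened aerial image features
        bev_enh, _ = self.bev_to_aerial(bev_tokens, aerial_tokens, aerial_tokens)
        air_enh, _ = self.aerial_to_bev(aerial_tokens, bev_tokens, bev_tokens)
        # Residual connections let each modality keep its own
        # geometry (LiDAR) or semantics (aerial) while absorbing the other.
        return bev_tokens + bev_enh, aerial_tokens + air_enh
```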

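Likewise, a hedged sketch of the other two components the abstract names: a likelihood map decoder producing a joint probability distribution over position and orientation, and an InfoNCE-style contrastive loss enforcing a shared embedding space. The grid discretization, orientation bin count, 1x1 convolutional head, and temperature are assumptions for illustration; the abstract only states that the decoder outputs spatial probability distributions and that a contrastive module aligns the modalities.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LikelihoodMapDecoder(nn.Module):
    """Decode fused features into a joint (position, orientation) likelihood."""

    def __init__(self, dim: int = 256, n_theta: int = 36):
        super().__init__()
        # Assumed head: one logit per orientation bin at every BEV cell.
        self.head = nn.Conv2d(dim, n_theta, kernel_size=1)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (B, dim, H, W) fused aerial/LiDAR feature map
        logits = self.head(fused)                          # (B, n_theta, H, W)
        b = logits.size(0)
        probs = F.softmax(logits.reshape(b, -1), dim=-1)   # joint distribution
        return probs.reshape_as(logits)                    # sums to 1 per sample


def info_nce(bev_emb: torch.Tensor, air_emb: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched BEV/aerial pairs along the batch diagonal."""
    bev_emb = F.normalize(bev_emb, dim=-1)
    air_emb = F.normalize(air_emb, dim=-1)
    logits = bev_emb @ air_emb.t() / temperature           # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```

In training, the decoded likelihood map would be supervised against the ground-truth pose cell while the contrastive term pulls matching BEV and aerial embeddings together, which is one plausible reading of how the two objectives combine.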
Country of Origin
🇺🇸 United States

Page Count
8 pages

Category
Computer Science:
Computer Vision and Pattern Recognition