A Parameter-Efficient Mixture-of-Experts Framework for Cross-Modal Geo-Localization
By: LinFeng Li, Jian Zhao, Zepeng Yang, and more
Potential Business Impact:
Drones find places using words and pictures.
We present a winning solution to RoboSense 2025 Track 4: Cross-Modal Drone Navigation. The task is to retrieve the most relevant geo-referenced image from a large multi-platform corpus (satellite/drone/ground) given a natural-language query. Two obstacles are severe inter-platform heterogeneity and a domain gap between generic training descriptions and platform-specific test queries. We mitigate these with a domain-aligned preprocessing pipeline and a Mixture-of-Experts (MoE) framework: (i) platform-wise partitioning, satellite augmentation, and removal of orientation words; (ii) an LLM-based caption refinement pipeline to align textual semantics with the distinct visual characteristics of each platform. Using BGE-M3 (text) and EVA-CLIP (image), we train three platform experts with a progressive two-stage hard-negative mining strategy to enhance discriminative power, and fuse their scores at inference. The system tops the official leaderboard, demonstrating robust cross-modal geo-localization under heterogeneous viewpoints.
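The abstract states that three platform experts are trained and their scores fused at inference, but does not specify the fusion rule. A minimal sketch of one plausible scheme, a weighted sum of per-expert cosine similarities, is shown below; the function name, the uniform weights, and the choice of weighted-sum fusion are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Normalize embeddings so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fuse_expert_scores(text_emb, gallery_embs_per_expert, weights=None):
    """Fuse retrieval scores from per-platform experts (hypothetical fusion).

    text_emb: (d,) query embedding from a text encoder (e.g. BGE-M3).
    gallery_embs_per_expert: list of (N, d) gallery embeddings, one array
        per platform expert (e.g. satellite / drone / ground).
    weights: optional per-expert fusion weights; uniform if omitted
        (an assumption -- the paper does not state the weighting).
    """
    n_experts = len(gallery_embs_per_expert)
    if weights is None:
        weights = [1.0 / n_experts] * n_experts
    q = l2_normalize(text_emb)
    fused = np.zeros(gallery_embs_per_expert[0].shape[0])
    for w, gallery in zip(weights, gallery_embs_per_expert):
        # Each expert scores every gallery image against the query.
        fused += w * (l2_normalize(gallery) @ q)
    return fused

# Toy usage: 3 experts, 5 gallery images, 8-dim embeddings.
rng = np.random.default_rng(0)
query = rng.normal(size=8)
galleries = [rng.normal(size=(5, 8)) for _ in range(3)]
scores = fuse_expert_scores(query, galleries)
best_match = int(np.argmax(scores))  # index of the top-ranked gallery image
```

With uniform weights summing to one, the fused score stays in [-1, 1], so rankings from differently calibrated experts remain comparable before argmax retrieval.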
Similar Papers
SkyMoE: A Vision-Language Foundation Model for Enhancing Geospatial Interpretation with Mixture of Experts
CV and Pattern Recognition
Helps satellites understand Earth better from space.
SMGeo: Cross-View Object Geo-Localization with Grid-Level Mixture-of-Experts
CV and Pattern Recognition
Find objects in satellite photos from drone pictures.
WeatherPrompt: Multi-modality Representation Learning for All-Weather Drone Visual Geo-Localization
CV and Pattern Recognition
Helps drones see where they are in bad weather.