FG$^2$: Fine-Grained Cross-View Localization by Fine-Grained Feature Matching
By: Zimin Xia, Alexandre Alahi
Potential Business Impact:
Helps cameras know their exact spot from above.
We propose a novel fine-grained cross-view localization method that estimates the 3 Degrees of Freedom pose of a ground-level image in an aerial image of the surroundings by matching fine-grained features between the two images. The pose is estimated by aligning a point plane generated from the ground image with a point plane sampled from the aerial image. To generate the ground points, we first map ground image features to a 3D point cloud. Our method then learns to select features along the height dimension to pool the 3D points to a Bird's-Eye-View (BEV) plane. This selection enables us to trace which feature in the ground image contributes to the BEV representation. Next, we sample a set of sparse matches from computed point correspondences between the two point planes and compute their relative pose using Procrustes alignment. Compared to the previous state-of-the-art, our method reduces the mean localization error by 28% on the VIGOR cross-area test set. Qualitative results show that our method learns semantically consistent matches across ground and aerial views through weakly supervised learning from the camera pose.
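The final step of the abstract, recovering a relative pose from matched point pairs via Procrustes alignment, is a standard closed-form computation. Below is a minimal sketch of rigid 2D Procrustes (Kabsch) alignment, assuming matched BEV points `src` (ground-derived) and `dst` (aerial-derived); the function name and interface are illustrative, not the paper's actual implementation.

```python
import numpy as np

def procrustes_align_2d(src, dst):
    """Rigid (rotation + translation) alignment of matched 2D point sets.

    Hypothetical sketch of the Procrustes step: given N matched points
    src[i] -> dst[i], recover the rotation R and translation t that
    minimize sum_i ||R @ src[i] + t - dst[i]||^2 in closed form via SVD.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Fix possible reflection so R is a proper rotation (det(R) = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

The SVD solution yields the globally optimal rigid transform for noise-free or least-squares matched correspondences, which is why sparse high-quality matches suffice for the 3-DoF (x, y, yaw) pose.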
Similar Papers
Fine-Grained Cross-View Localization via Local Feature Matching and Monocular Depth Priors
CV and Pattern Recognition
Finds your location from a picture.
Revisiting Cross-View Localization from Image Matching
CV and Pattern Recognition
Helps cameras find places from the sky.
Aerial-Ground Image Feature Matching via 3D Gaussian Splatting-based Intermediate View Rendering
CV and Pattern Recognition
Creates better 3D maps from different camera views.