MFAF: An EVA02-Based Multi-scale Frequency Attention Fusion Method for Cross-View Geo-Localization
By: YiTong Liu, TianZhu Liu, YanFeng GU
Potential Business Impact:
Finds a drone's location from its pictures.
Cross-view geo-localization aims to determine the geographical location of a query image by matching it against a gallery of reference images. The task is challenging because object appearance varies substantially across viewpoints and discriminative features are difficult to extract. Existing approaches often extract features by segmenting the feature map while neglecting spatial and semantic information. To address these issues, we propose the EVA02-based Multi-scale Frequency Attention Fusion (MFAF) method. MFAF consists of the Multi-Frequency Branch-wise Block (MFB) and the Frequency-aware Spatial Attention (FSA) module. The MFB block captures both low-frequency structural features and high-frequency edge details across multiple scales, improving the consistency and robustness of feature representations across viewpoints. The FSA module adaptively focuses on the key regions of the frequency features, substantially mitigating interference from background noise and viewpoint variability. Extensive experiments on widely recognized benchmarks, including University-1652, SUES-200, and Dense-UAV, demonstrate that MFAF achieves competitive performance on both drone-localization and drone-navigation tasks.
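The abstract describes the two components only at a high level, so the following PyTorch sketch is one possible reading of the design, not the authors' implementation: the average-pooling low/high-frequency split, the scale set, the kernel sizes, the channel widths, and the example feature shape are all illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiFrequencyBranch(nn.Module):
    # Assumed reading of the MFB block: split features into low-frequency
    # structure (blurred map) and high-frequency detail (residual) at several
    # scales, refine each branch, then fuse back to the original width.
    def __init__(self, channels, scales=(2, 4)):
        super().__init__()
        self.scales = scales
        self.branch_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(2 * len(scales))]
        )
        self.fuse = nn.Conv2d(channels * (1 + 2 * len(scales)), channels, 1)

    def forward(self, x):
        feats = [x]
        for idx, s in enumerate(self.scales):
            low = F.avg_pool2d(x, kernel_size=s)                      # low-frequency structure
            low = F.interpolate(low, size=x.shape[-2:], mode="bilinear",
                                align_corners=False)
            high = x - low                                            # high-frequency edges
            feats.append(self.branch_convs[2 * idx](low))
            feats.append(self.branch_convs[2 * idx + 1](high))
        return self.fuse(torch.cat(feats, dim=1))

class FrequencyAwareSpatialAttention(nn.Module):
    # Assumed reading of the FSA module: a spatial attention map built from
    # per-pixel channel statistics that reweights key regions and suppresses
    # background clutter.
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        attn = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * attn

# Usage: backbone features reshaped to a 2-D grid (shape assumed for illustration).
feat = torch.randn(2, 768, 16, 16)
out = FrequencyAwareSpatialAttention()(MultiFrequencyBranch(768)(feat))  # same shape as feat

The pooling-based decomposition above is only the simplest frequency separation consistent with the abstract's wording; a DCT or wavelet transform would be an equally plausible way to obtain the low- and high-frequency branches.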
Similar Papers
MAFNet: Multi-frequency Adaptive Fusion Network for Real-time Stereo Matching
CV and Pattern Recognition
Makes 3D vision work fast on phones.
Towards a Generalizable Fusion Architecture for Multimodal Object Detection
CV and Pattern Recognition
Helps cameras see better in fog and dark.
A Spatial-Frequency Aware Multi-Scale Fusion Network for Real-Time Deepfake Detection
CV and Pattern Recognition
Finds fake videos fast, even on phones.