PMPGuard: Catching Pseudo-Matched Pairs in Remote Sensing Image-Text Retrieval
By: Pengxiang Ouyang, Qing Ma, Zheng Wang, and more
Remote sensing (RS) image-text retrieval faces significant challenges on real-world datasets due to the presence of Pseudo-Matched Pairs (PMPs): semantically mismatched or weakly aligned image-text pairs that hinder the learning of reliable cross-modal alignments. To address this issue, we propose a novel retrieval framework that leverages Cross-Modal Gated Attention and a Positive-Negative Awareness Attention mechanism to mitigate the impact of such noisy associations. The gated module dynamically regulates cross-modal information flow, while the awareness mechanism explicitly distinguishes informative (positive) cues from misleading (negative) ones during alignment learning. Extensive experiments on three benchmark RS datasets, i.e., RSICD, RSITMD, and RS5M, demonstrate that our method consistently achieves state-of-the-art performance, highlighting its robustness and effectiveness in handling real-world mismatches and PMPs in RS image-text retrieval.
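The abstract does not give implementation details, but the idea of a gated module that "regulates cross-modal information flow" can be sketched as attention from text tokens over image regions, followed by a learned sigmoid gate that decides how much attended visual evidence to let through versus keeping the original text feature. All names, shapes, and the gating formula below are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(text, image, W_gate, b_gate):
    """Hypothetical gated cross-modal attention step.

    text:   (L_t, d) text token features (queries)
    image:  (L_i, d) image region features (keys/values)
    W_gate: (2d, d), b_gate: (d,) -- parameters of the gate (assumed shapes)
    """
    d = text.shape[-1]
    # Scaled dot-product attention from text tokens to image regions.
    attn = softmax(text @ image.T / np.sqrt(d), axis=-1)      # (L_t, L_i)
    attended = attn @ image                                    # (L_t, d)
    # Sigmoid gate conditioned on both the text token and its attended view;
    # a gate near 0 suppresses visual evidence (e.g., for a pseudo-matched pair).
    gate_in = np.concatenate([text, attended], axis=-1)        # (L_t, 2d)
    gate = 1.0 / (1.0 + np.exp(-(gate_in @ W_gate + b_gate)))  # (L_t, d)
    return gate * attended + (1.0 - gate) * text

# Toy usage with random features.
rng = np.random.default_rng(0)
out = gated_cross_attention(
    rng.normal(size=(4, 8)),    # 4 text tokens
    rng.normal(size=(6, 8)),    # 6 image regions
    rng.normal(size=(16, 8)),
    np.zeros(8),
)
```

Because the gate is applied element-wise, each feature dimension of each token can independently attenuate noisy cross-modal signal, which is one plausible way a model could down-weight PMPs during alignment learning.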
Similar Papers
Beyond Pixels: A Training-Free, Text-to-Text Framework for Remote Sensing Image Retrieval
CV and Pattern Recognition
Find satellite pictures using words, no training needed.
A Vision Centric Remote Sensing Benchmark
CV and Pattern Recognition
Helps computers understand satellite pictures better.
Beyond Artificial Misalignment: Detecting and Grounding Semantic-Coordinated Multimodal Manipulations
CV and Pattern Recognition
Finds fake pictures with matching fake stories.