Score: 3

Multimodal Feature Fusion Network with Text Difference Enhancement for Remote Sensing Change Detection

Published: September 4, 2025 | arXiv ID: 2509.03961v1

By: Yijun Zhou, Yikui Zhai, Zilu Ying, and more

Potential Business Impact:

Automatically spots changes between satellite images taken at different times, using text descriptions alongside the images to improve accuracy.

Business Areas:
Image Recognition, Data and Analytics, Software

Although deep learning has advanced remote sensing change detection (RSCD), most methods rely solely on the image modality, which limits feature representation, change-pattern modeling, and generalization, especially under illumination and noise disturbances. To address this, we propose MMChange, a multimodal RSCD method that combines image and text modalities to enhance accuracy and robustness. An Image Feature Refinement (IFR) module is introduced to highlight key regions and suppress environmental noise. To overcome the semantic limitations of image features, we employ a vision-language model (VLM) to generate semantic descriptions of bitemporal images. A Textual Difference Enhancement (TDE) module then captures fine-grained semantic shifts, guiding the model toward meaningful changes. To bridge the heterogeneity between modalities, we design an Image-Text Feature Fusion (ITFF) module that enables deep cross-modal integration. Extensive experiments on LEVIR-CD, WHU-CD, and SYSU-CD demonstrate that MMChange consistently surpasses state-of-the-art methods across multiple metrics, validating its effectiveness for multimodal RSCD. Code is available at: https://github.com/yikuizhai/MMChange.
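To make the fusion idea concrete, below is a minimal PyTorch sketch of how text-difference enhancement and image-text cross-attention could be wired together. The class names, tensor shapes, and the use of standard multi-head attention are assumptions made for illustration; they do not reflect the authors' released implementation.

```python
# Hypothetical sketch of the MMChange-style fusion described in the abstract:
# bitemporal image features are combined with the difference of two
# VLM-generated text embeddings via cross-modal attention.
import torch
import torch.nn as nn


class TextDifferenceEnhancement(nn.Module):
    """Hypothetical TDE-style block: emphasizes semantic shifts between
    the text embeddings of the two acquisition dates."""

    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_t1: torch.Tensor, text_t2: torch.Tensor) -> torch.Tensor:
        # The difference of the two caption embeddings highlights what changed.
        diff = text_t2 - text_t1
        return self.norm(self.proj(diff))


class ImageTextFusion(nn.Module):
    """Hypothetical ITFF-style block: image difference features attend to
    the text-difference embedding (cross-modal attention)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor, text_diff: torch.Tensor) -> torch.Tensor:
        # img_tokens: (B, N, D) flattened bitemporal image difference features
        # text_diff:  (B, T, D) tokens of the text-difference embedding
        fused, _ = self.attn(query=img_tokens, key=text_diff, value=text_diff)
        return self.norm(img_tokens + fused)


if __name__ == "__main__":
    B, N, T, D = 2, 64, 8, 256
    img_diff = torch.randn(B, N, D)  # stand-in for refined image features
    txt_t1 = torch.randn(B, T, D)    # stand-in for VLM caption embedding, date 1
    txt_t2 = torch.randn(B, T, D)    # stand-in for VLM caption embedding, date 2

    tde = TextDifferenceEnhancement(D)
    itff = ImageTextFusion(D)
    fused = itff(img_diff, tde(txt_t1, txt_t2))
    print(fused.shape)  # torch.Size([2, 64, 256])
```

In this sketch the text branch only modulates the image branch; the paper's actual modules may differ in structure and in how the change map is decoded from the fused features.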

Country of Origin
πŸ‡²πŸ‡΄ πŸ‡­πŸ‡° πŸ‡ΊπŸ‡Έ πŸ‡¨πŸ‡³ Macao, Hong Kong, United States, China

Repos / Data Links
https://github.com/yikuizhai/MMChange

Page Count
15 pages

Category
Computer Science:
Computer Vision and Pattern Recognition