Towards a Generalizable Fusion Architecture for Multimodal Object Detection

Published: October 20, 2025 | arXiv ID: 2510.17078v1

By: Jad Berjawi, Yoann Dupas, Christophe Cérin

Potential Business Impact:

Helps cameras see better in fog and darkness.

Business Areas:
Image Recognition, Data and Analytics, Software

Multimodal object detection improves robustness in challenging conditions by leveraging complementary cues from multiple sensor modalities. We introduce Filtered Multi-Modal Cross Attention Fusion (FMCAF), a preprocessing architecture designed to enhance the fusion of RGB and infrared (IR) inputs. FMCAF combines a frequency-domain filtering block (Freq-Filter) to suppress redundant spectral features with a cross-attention-based fusion module (MCAF) to improve intermodal feature sharing. Unlike approaches tailored to specific datasets, FMCAF aims for generalizability, improving performance across different multimodal challenges without requiring dataset-specific tuning. On LLVIP (low-light pedestrian detection) and VEDAI (aerial vehicle detection), FMCAF outperforms traditional fusion (concatenation), achieving +13.9% mAP@50 on VEDAI and +1.1% on LLVIP. These results support the potential of FMCAF as a flexible foundation for robust multimodal fusion in future detection pipelines.
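The abstract names the two components but not their internals, so the sketch below is a hypothetical PyTorch illustration of what a Freq-Filter plus cross-attention fusion stage could look like: the learnable spectral mask, the bidirectional attention layout, and all tensor dimensions are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of an FMCAF-style RGB/IR fusion stage (assumed design;
# only the high-level structure — frequency filtering followed by
# cross-attention fusion — comes from the abstract).
import torch
import torch.nn as nn


class FreqFilter(nn.Module):
    """Suppress redundant spectral content with a learnable frequency mask
    (assumed form; the paper's exact filter may differ)."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One gate per rFFT frequency bin, initialised to pass everything.
        self.mask = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        spec = torch.fft.rfft2(x, norm="ortho")           # to frequency domain
        spec = spec * torch.sigmoid(self.mask)            # soft spectral gating
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")


class MCAF(nn.Module):
    """Cross-attention fusion: each modality attends to the other, then the
    refined streams are concatenated (layout is an assumption)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.rgb_from_ir = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ir_from_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        # Flatten the spatial grid into a token sequence: (B, H*W, C).
        r = rgb.flatten(2).transpose(1, 2)
        i = ir.flatten(2).transpose(1, 2)
        r2, _ = self.rgb_from_ir(r, i, i)   # RGB queries, IR keys/values
        i2, _ = self.ir_from_rgb(i, r, r)   # IR queries, RGB keys/values
        fused = torch.cat([r + r2, i + i2], dim=2)        # (B, H*W, 2C)
        return fused.transpose(1, 2).reshape(b, 2 * c, h, w)


if __name__ == "__main__":
    rgb = torch.randn(2, 32, 40, 40)        # toy RGB feature map
    ir = torch.randn(2, 32, 40, 40)         # toy IR feature map
    filt = FreqFilter(32, 40, 40)
    fused = MCAF(dim=32)(filt(rgb), filt(ir))
    print(fused.shape)                       # torch.Size([2, 64, 40, 40])
```

Under these assumptions, the appeal of the design is that a single learnable mask in the frequency domain can damp spectral bands the two modalities encode redundantly, while the two cross-attention passes let each stream borrow complementary detail from the other before a downstream detection head consumes the fused tensor.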

Country of Origin
🇫🇷 France

Page Count
9 pages

Category
Computer Science:
Computer Vision and Pattern Recognition