Towards a Generalizable Fusion Architecture for Multimodal Object Detection
By: Jad Berjawi, Yoann Dupas, Christophe Cérin
Potential Business Impact:
Helps cameras see better in fog and darkness.
Multimodal object detection improves robustness in challenging conditions by leveraging complementary cues from multiple sensor modalities. We introduce Filtered Multi-Modal Cross Attention Fusion (FMCAF), a preprocessing architecture designed to enhance the fusion of RGB and infrared (IR) inputs. FMCAF combines a frequency-domain filtering block (Freq-Filter) to suppress redundant spectral features with a cross-attention-based fusion module (MCAF) to improve intermodal feature sharing. Unlike approaches tailored to specific datasets, FMCAF aims for generalizability, improving performance across different multimodal challenges without requiring dataset-specific tuning. On LLVIP (low-light pedestrian detection) and VEDAI (aerial vehicle detection), FMCAF outperforms traditional fusion (concatenation), achieving +13.9% mAP@50 on VEDAI and +1.1% on LLVIP. These results support the potential of FMCAF as a flexible foundation for robust multimodal fusion in future detection pipelines.
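To make the described pipeline concrete, below is a minimal, hypothetical PyTorch sketch of the two components named in the abstract. The abstract does not specify the internals of Freq-Filter or MCAF, so the learnable spectral gate and the bidirectional cross-attention with nn.MultiheadAttention are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an FMCAF-style preprocessing stage (PyTorch).
# The spectral gate in FreqFilter and the attention layout in MCAF are
# assumptions for illustration; the paper's exact designs may differ.
import torch
import torch.nn as nn


class FreqFilter(nn.Module):
    """Frequency-domain filtering: FFT -> learnable spectral gate -> inverse FFT.

    A per-frequency sigmoid gate is one plausible way to suppress
    redundant spectral components, as the abstract describes.
    """

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # rfft2 keeps width // 2 + 1 frequency bins along the last dim.
        self.mask = nn.Parameter(torch.ones(channels, height, width // 2 + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        spec = torch.fft.rfft2(x, norm="ortho")         # (B, C, H, W//2+1), complex
        spec = spec * torch.sigmoid(self.mask)          # gate each frequency bin
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")


class MCAF(nn.Module):
    """Bidirectional cross-attention fusion of RGB and IR feature maps."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.rgb_from_ir = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.ir_from_rgb = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        b, c, h, w = rgb.shape
        rgb_tok = rgb.flatten(2).transpose(1, 2)        # (B, HW, C) token view
        ir_tok = ir.flatten(2).transpose(1, 2)
        # Each stream queries the other modality's features.
        rgb_att, _ = self.rgb_from_ir(rgb_tok, ir_tok, ir_tok)
        ir_att, _ = self.ir_from_rgb(ir_tok, rgb_tok, rgb_tok)
        rgb_out = rgb_att.transpose(1, 2).reshape(b, c, h, w)
        ir_out = ir_att.transpose(1, 2).reshape(b, c, h, w)
        return self.proj(torch.cat([rgb_out, ir_out], dim=1))


if __name__ == "__main__":
    rgb = torch.randn(2, 32, 16, 16)   # toy RGB feature map
    ir = torch.randn(2, 32, 16, 16)    # toy IR feature map
    filt = FreqFilter(32, 16, 16)
    fuse = MCAF(32)
    fused = fuse(filt(rgb), filt(ir))  # filter each stream, then cross-attend
    print(fused.shape)                 # torch.Size([2, 32, 16, 16])
```

The fused tensor would then replace the plain channel concatenation that the paper uses as its baseline, feeding into a standard detection head.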
Similar Papers
FreDFT: Frequency Domain Fusion Transformer for Visible-Infrared Object Detection
CV and Pattern Recognition
Helps cameras see better in bad weather.
Small Lesions-aware Bidirectional Multimodal Multiscale Fusion Network for Lung Disease Classification
CV and Pattern Recognition
Finds tiny sickness spots doctors miss.
DyCAF-Net: Dynamic Class-Aware Fusion Network
CV and Pattern Recognition
Helps computers see objects better, even when hidden.