Representation Space Constrained Learning with Modality Decoupling for Multimodal Object Detection
By: YiKang Shao, Tao Shi
Potential Business Impact:
Helps computers see better using different senses.
Multimodal object detection has attracted significant attention in both academia and industry for its enhanced robustness. Although numerous studies have focused on improving modality fusion strategies, most neglect fusion degradation, and none provide a theoretical analysis of its underlying causes. To fill this gap, this paper presents a systematic theoretical investigation of fusion degradation in multimodal detection and identifies two key optimization deficiencies: (1) the gradients of unimodal branch backbones are severely suppressed under multimodal architectures, leaving the unimodal branches under-optimized; (2) disparities in modality quality cause weaker modalities to experience stronger gradient suppression, which in turn produces imbalanced modality learning. To address these issues, this paper proposes Representation Space Constrained Learning with Modality Decoupling (RSC-MD), a method comprising two modules: the RSC module amplifies the suppressed gradients, while the MD module eliminates inter-modality coupling interference and modality imbalance, together enabling comprehensive optimization of each modality-specific backbone. Extensive experiments on the FLIR, LLVIP, M3FD, and MFAD datasets demonstrate that the proposed method effectively alleviates fusion degradation and achieves state-of-the-art performance across multiple benchmarks. The code and training procedures will be released at https://github.com/yikangshao/RSC-MD.
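The two deficiencies above can be seen in a toy example. The following sketch is purely illustrative and is not the paper's RSC-MD implementation: it uses a hand-derived gradient of a linear two-branch fusion to show that a weaker modality (smaller feature magnitude) receives a proportionally smaller gradient through the fused loss, while a decoupled per-branch loss restores a direct gradient path to each backbone.

```python
# Illustrative toy, not the paper's method: gradients of a squared loss
# through a linear fusion y = w_rgb * x_rgb + w_ir * x_ir.

def fused_gradients(w_rgb, w_ir, x_rgb, x_ir, target):
    """Gradients of L = (y - target)^2 w.r.t. each branch weight."""
    y = w_rgb * x_rgb + w_ir * x_ir      # fused prediction
    err = 2.0 * (y - target)             # dL/dy
    return err * x_rgb, err * x_ir       # dL/dw_rgb, dL/dw_ir

def decoupled_gradient(w, x, target):
    """Gradient of an auxiliary unimodal loss (w*x - target)^2,
    independent of the other branch (the idea behind decoupling)."""
    return 2.0 * (w * x - target) * x

# A strong RGB feature vs. a weak IR feature (e.g. a low-quality thermal frame).
g_rgb, g_ir = fused_gradients(w_rgb=1.0, w_ir=1.0, x_rgb=2.0, x_ir=0.1, target=0.0)
print(abs(g_ir) / abs(g_rgb))   # the weak branch gets ~5% of the strong branch's gradient
print(decoupled_gradient(1.0, 0.1, 0.0))  # nonzero direct unimodal gradient
```

Through the fused loss, the IR branch's gradient is scaled by its own feature magnitude, so the already-weaker modality learns more slowly; an auxiliary unimodal objective sidesteps this coupling, which is the intuition the abstract attributes to the MD module.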
Similar Papers
Learning Representation and Synergy Invariances: A Provable Framework for Generalized Multimodal Face Anti-Spoofing
CV and Pattern Recognition
Keeps fake faces from fooling face scanners.
Modality-Collaborative Low-Rank Decomposers for Few-Shot Video Domain Adaptation
CV and Pattern Recognition
Helps computers learn from few video examples.
Dual-level Modality Debiasing Learning for Unsupervised Visible-Infrared Person Re-Identification
CV and Pattern Recognition
Helps cameras see the same person in different light.