Learning Contrastive Multimodal Fusion with Improved Modality Dropout for Disease Detection and Prediction
By: Yi Gu, Kuniaki Saito, Jiaxin Ma
Potential Business Impact:
Helps doctors diagnose illness even when some patient data is missing.
As medical diagnosis increasingly leverages multimodal data, machine learning models are expected to fuse heterogeneous information effectively while remaining robust to missing modalities. In this work, we propose a novel multimodal learning framework that integrates enhanced modality dropout and contrastive learning to address real-world limitations such as modality imbalance and missingness. Our approach introduces learnable modality tokens to improve missingness-aware modality fusion and augments conventional unimodal contrastive objectives with fused multimodal representations. We validate our framework on large-scale clinical datasets for disease detection and prediction tasks spanning visual and tabular modalities. Experimental results demonstrate that our method achieves state-of-the-art performance, particularly in the challenging and practical scenario where only a single modality is available. Furthermore, we show its adaptability through successful integration with a recent CT foundation model. Our findings highlight the effectiveness, efficiency, and generalizability of our approach, offering a scalable, low-cost solution with significant potential for real-world clinical applications. The code is available at https://github.com/omron-sinicx/medical-modality-dropout.
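To make the two ingredients named in the abstract concrete, here is a minimal PyTorch sketch of (a) modality dropout with learnable modality tokens substituted for missing inputs and (b) a contrastive objective applied to both unimodal and fused representations. All names (`MissingnessAwareFusion`, `info_nce`), dimensions, the dropout rate, and the simple concatenation-based fusion head are illustrative assumptions, not the authors' implementation; see the linked repository for the actual method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MissingnessAwareFusion(nn.Module):
    """Sketch: substitute a learnable token for each missing (or dropped-out)
    modality, then fuse all modality embeddings. Illustrative only."""
    def __init__(self, dim: int, num_modalities: int, p_drop: float = 0.3):
        super().__init__()
        # One learnable token per modality, used when that modality is absent.
        self.modality_tokens = nn.Parameter(torch.randn(num_modalities, dim) * 0.02)
        self.p_drop = p_drop
        self.fuse = nn.Sequential(nn.Linear(num_modalities * dim, dim), nn.ReLU())

    def forward(self, feats: torch.Tensor, observed: torch.Tensor) -> torch.Tensor:
        # feats: (B, M, D) unimodal embeddings; observed: (B, M) bool mask (True = present).
        if self.training:
            # Modality dropout: randomly treat observed modalities as missing.
            drop = torch.rand_like(observed, dtype=torch.float) < self.p_drop
            observed = observed & ~drop
        # Replace missing modalities with their learnable tokens.
        tokens = self.modality_tokens.unsqueeze(0).expand_as(feats)
        feats = torch.where(observed.unsqueeze(-1), feats, tokens)
        return self.fuse(feats.flatten(1))  # (B, D) fused representation

def info_nce(a: torch.Tensor, b: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE between two batches of embeddings; matched rows are positives."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / tau
    target = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, target) + F.cross_entropy(logits.t(), target))
```

A usage sketch of the augmented objective: alongside the conventional unimodal pair (e.g., image vs. tabular), the fused representation is contrasted against each unimodal one, with equal weights chosen here purely for illustration.

```python
B, M, D = 8, 2, 128
feats = torch.randn(B, M, D)                     # e.g., image and tabular embeddings
observed = torch.ones(B, M, dtype=torch.bool)    # all modalities present in this toy batch
fusion = MissingnessAwareFusion(D, M)
fused = fusion(feats, observed)
loss = (info_nce(feats[:, 0], feats[:, 1])       # conventional unimodal contrast
        + info_nce(fused, feats[:, 0])           # fused vs. image
        + info_nce(fused, feats[:, 1]))          # fused vs. tabular
```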
Similar Papers
What are You Looking at? Modality Contribution in Multimodal Medical Deep Learning Methods
CV and Pattern Recognition
Shows how AI uses different patient data.
Causal Debiasing Medical Multimodal Representation Learning with Missing Modalities
Machine Learning (CS)
Fixes medical AI when data is missing.
Multimodal Medical Image Classification via Synergistic Learning Pre-training
CV and Pattern Recognition
Helps doctors diagnose illness from many types of medical images.