Score: 2

Learning Contrastive Multimodal Fusion with Improved Modality Dropout for Disease Detection and Prediction

Published: September 22, 2025 | arXiv ID: 2509.18284v1

By: Yi Gu, Kuniaki Saito, Jiaxin Ma

Potential Business Impact:

Helps clinicians detect and predict disease even when some patient data (e.g., imaging or tabular records) is missing.

Business Areas:
Data and Analytics

As medical diagnoses increasingly leverage multimodal data, machine learning models are expected to effectively fuse heterogeneous information while remaining robust to missing modalities. In this work, we propose a novel multimodal learning framework that integrates enhanced modality dropout and contrastive learning to address real-world limitations such as modality imbalance and missingness. Our approach introduces learnable modality tokens to improve missingness-aware fusion and augments conventional unimodal contrastive objectives with fused multimodal representations. We validate our framework on large-scale clinical datasets for disease detection and prediction tasks, encompassing both visual and tabular modalities. Experimental results demonstrate that our method achieves state-of-the-art performance, particularly in challenging and practical scenarios where only a single modality is available. Furthermore, we show its adaptability through successful integration with a recent CT foundation model. Our findings highlight the effectiveness, efficiency, and generalizability of our approach for multimodal learning, offering a scalable, low-cost solution with significant potential for real-world clinical applications. The code is available at https://github.com/omron-sinicx/medical-modality-dropout.
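The core idea of missingness-aware fusion can be sketched in a few lines: when a modality is absent at inference time (or randomly dropped during training, as in modality dropout), its slot is filled by a dedicated modality token so the fused representation keeps a fixed layout. The class name, shapes, and constant-valued tokens below are illustrative assumptions, not the authors' implementation; in the actual framework the tokens would be learnable parameters and fusion would involve a trained head rather than plain concatenation.

```python
import random

class ModalityDropoutFusion:
    """Minimal sketch of missingness-aware fusion with modality tokens.

    A missing or dropped modality is replaced by its token vector, so
    downstream layers always see the same input layout. Hypothetical
    names/shapes; tokens here are fixed constants for illustration only.
    """

    def __init__(self, modalities, dim, drop_prob=0.3, seed=0):
        self.modalities = list(modalities)
        self.dim = dim
        self.drop_prob = drop_prob          # modality-dropout rate (training)
        self.rng = random.Random(seed)
        # One token per modality; a real model would learn these parameters.
        self.tokens = {m: [0.01 * (i + 1)] * dim
                       for i, m in enumerate(self.modalities)}

    def fuse(self, features, training=False):
        """features: dict mapping modality name -> vector, or None if missing."""
        parts = []
        for m in self.modalities:
            x = features.get(m)
            dropped = training and self.rng.random() < self.drop_prob
            if x is None or dropped:
                parts.extend(self.tokens[m])   # substitute the modality token
            else:
                parts.extend(x)
        return parts  # concatenation stands in for a learned fusion head

# Example: tabular data missing, image present
fusion = ModalityDropoutFusion(["image", "tabular"], dim=4)
fused = fusion.fuse({"image": [1.0, 2.0, 3.0, 4.0], "tabular": None})
```

Because the token keeps the tabular slot occupied, `fused` always has length 8 here regardless of which modalities are observed, which is what lets a single model handle the single-modality scenarios the abstract highlights.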

Repos / Data Links
https://github.com/omron-sinicx/medical-modality-dropout

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition