Robust Multimodal Representation Learning in Healthcare
By: Xiaoguang Zhu, Linxiao Gong, Lianlong Sun, and more
Potential Business Impact:
Improves patient outcome predictions from mixed medical data.
Medical multimodal representation learning aims to integrate heterogeneous data into unified patient representations to support clinical outcome prediction. However, real-world medical datasets commonly contain systematic biases from multiple sources, which pose significant challenges for medical multimodal representation learning. Existing approaches typically focus on effective multimodal fusion while neglecting inherent biased features that degrade generalization. To address these challenges, we propose a Dual-Stream Feature Decorrelation Framework that identifies and handles biases introduced by latent confounders through structural causal analysis. Our method employs a causal-biased decorrelation framework with dual-stream neural networks to disentangle causal features from spurious correlations, utilizing generalized cross-entropy loss and mutual information minimization for effective decorrelation. The framework is model-agnostic and can be integrated into existing medical multimodal learning methods. Comprehensive experiments on the MIMIC-IV, eICU, and ADNI datasets demonstrate consistent performance improvements.
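The abstract names two concrete ingredients: a generalized cross-entropy (GCE) loss, whose standard form is L_q = (1 - p_y^q) / q, and a decorrelation term between the causal and biased feature streams. The sketch below, in NumPy, shows the GCE loss and a simple cross-covariance penalty as a stand-in for the mutual-information minimization; the paper's exact MI estimator and network architecture are not given here, so both function names and the penalty choice are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy: L_q = (1 - p_y^q) / q.

    probs: (N, C) softmax outputs; labels: (N,) integer class ids.
    As q -> 0 this recovers standard cross-entropy; q = 1 gives an
    MAE-like loss that is more robust to noisy or biased labels.
    """
    p_y = probs[np.arange(len(labels)), labels]
    return float(np.mean((1.0 - p_y ** q) / q))

def cross_covariance_penalty(z_causal, z_bias):
    """Squared Frobenius norm of the cross-covariance between the two
    feature streams. Driving it to zero decorrelates the causal and
    biased features -- a crude linear proxy for minimizing their
    mutual information (an assumption; the paper's estimator may differ).
    """
    zc = z_causal - z_causal.mean(axis=0)
    zb = z_bias - z_bias.mean(axis=0)
    cov = zc.T @ zb / (len(zc) - 1)
    return float(np.sum(cov ** 2))
```

In a dual-stream setup, one network head would be trained with GCE (encouraging it to latch onto easy, biased shortcuts) while the other head is trained with standard cross-entropy plus the decorrelation penalty against the first stream's features, so its representation is pushed away from the spurious correlations.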
Similar Papers
Causal Debiasing Medical Multimodal Representation Learning with Missing Modalities
Machine Learning (CS)
Fixes medical AI when data is missing.
Causal Representation Learning from Multimodal Clinical Records under Non-Random Modality Missingness
Machine Learning (CS)
Helps doctors predict patient health better.
Improving Hospital Risk Prediction with Knowledge-Augmented Multimodal EHR Modeling
Machine Learning (CS)
Predicts patient risks more accurately from records.