Interpretable Generative and Discriminative Learning for Multimodal and Incomplete Clinical Data
By: Albert Belenguer-Llorens, Carlos Sevilla-Salcedo, Janaina Mourao-Miranda, and more
Potential Business Impact:
Helps clinicians interpret multimodal patient data and fill in missing clinical measurements.
Real-world clinical problems are often characterized by multimodal data, usually associated with incomplete views and limited sample sizes in their cohorts, posing significant challenges for machine learning algorithms. In this work, we propose a Bayesian approach designed to efficiently handle these challenges while providing interpretable solutions. Our approach integrates (1) a generative formulation to capture cross-view relationships with a semi-supervised strategy, and (2) a discriminative, task-oriented formulation to identify the information relevant to specific downstream objectives. This dual generative-discriminative formulation offers both a general understanding of the data and task-specific insights; it also provides automatic imputation of missing views while enabling robust inference across different data sources. The potential of this approach becomes evident when applied to multimodal clinical data, where our algorithm is able to capture and disentangle the complex interactions among biological, psychological, and sociodemographic modalities.
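As a rough illustration of the general idea, and not the authors' actual Bayesian variational model, the sketch below fits a linear shared-latent factor model to synthetic two-view data by alternating least squares: a common latent code is inferred from whichever views are observed, a missing view is imputed from that code, and a simple discriminative head is trained only on the labeled subset (semi-supervised). All variable names, dimensions, and hyperparameters here are illustrative assumptions.

```python
# Minimal sketch of a shared-latent multi-view model with missing-view
# imputation and a semi-supervised discriminative head. This is an
# assumption-laden toy (point estimates via alternating least squares),
# not the paper's Bayesian formulation.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic multimodal data: two views generated from a shared latent ---
n, d1, d2, k = 300, 20, 15, 4
Z_true = rng.normal(size=(n, k))
W1_true = rng.normal(size=(d1, k))
W2_true = rng.normal(size=(d2, k))
X1 = Z_true @ W1_true.T + 0.1 * rng.normal(size=(n, d1))
X2 = Z_true @ W2_true.T + 0.1 * rng.normal(size=(n, d2))
y = (Z_true @ rng.normal(size=k) > 0).astype(float)   # downstream labels

# Simulate incompleteness: view 2 missing for 30% of samples,
# labels available for only half of the cohort (semi-supervised setting).
miss2 = rng.random(n) < 0.3
labeled = rng.random(n) < 0.5

# --- Alternating least squares for the shared latent space ---
lam = 1e-2                                # ridge regularizer (assumed value)
W1 = rng.normal(size=(d1, k))
W2 = rng.normal(size=(d2, k))
Z = rng.normal(size=(n, k))

for _ in range(50):
    # Update each sample's latent code using only its observed views.
    for i in range(n):
        W_obs = W1 if miss2[i] else np.vstack([W1, W2])
        x_obs = X1[i] if miss2[i] else np.concatenate([X1[i], X2[i]])
        A = W_obs.T @ W_obs + lam * np.eye(k)
        Z[i] = np.linalg.solve(A, W_obs.T @ x_obs)
    # Update loading matrices using only samples where each view is observed.
    W1 = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ X1).T
    Zo, X2o = Z[~miss2], X2[~miss2]
    W2 = np.linalg.solve(Zo.T @ Zo + lam * np.eye(k), Zo.T @ X2o).T

# --- Automatic imputation of the missing view from the shared latent ---
X2_imputed = Z[miss2] @ W2.T
# X2[miss2] is known here only because the data are simulated.
print("imputation RMSE:", np.sqrt(np.mean((X2_imputed - X2[miss2]) ** 2)))

# --- Discriminative head on the latent space (labeled samples only) ---
w = np.linalg.solve(Z[labeled].T @ Z[labeled] + lam * np.eye(k),
                    Z[labeled].T @ (2 * y[labeled] - 1))
acc = np.mean((Z[~labeled] @ w > 0) == y[~labeled].astype(bool))
print("held-out label accuracy:", acc)
```

The design choice mirrored here is that imputation and prediction both pass through the same shared latent space, so any view can be reconstructed from the others and the downstream task only ever sees the latent representation; the paper's approach additionally places priors over these quantities and infers them jointly.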
Similar Papers
A Generative Imputation Method for Multimodal Alzheimer's Disease Diagnosis
Image and Video Processing
Fixes missing brain scan data for better disease detection.
A Semi-supervised Generative Model for Incomplete Multi-view Data Integration with Missing Labels
Machine Learning (CS)
Helps computers learn from incomplete data.
No Modality Left Behind: Dynamic Model Generation for Incomplete Medical Data
CV and Pattern Recognition
Helps doctors diagnose illness with missing scan data.