Benchmarking Foundation Models with Multimodal Public Electronic Health Records
By: Kunyu Yu, Rui Yang, Jingchi Liao, and more
Potential Business Impact:
Provides a standardized, reproducible benchmark for evaluating multimodal clinical AI models, helping developers build more accurate and trustworthy tools for predicting patient outcomes.
Foundation models have emerged as a powerful approach for processing electronic health records (EHRs), offering flexibility to handle diverse medical data modalities. In this study, we present a comprehensive benchmark that evaluates the performance, fairness, and interpretability of foundation models, both as unimodal encoders and as multimodal learners, using the publicly available MIMIC-IV database. To support consistent and reproducible evaluation, we developed a standardized data processing pipeline that harmonizes heterogeneous clinical records into an analysis-ready format. We systematically compared eight foundation models, encompassing both unimodal and multimodal models, as well as domain-specific and general-purpose variants. Our findings demonstrate that incorporating multiple data modalities leads to consistent improvements in predictive performance without introducing additional bias. Through this benchmark, we aim to support the development of effective and trustworthy multimodal artificial intelligence (AI) systems for real-world clinical applications. Our code is available at https://github.com/nliulab/MIMIC-Multimodal.
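The harmonization step described in the abstract could be sketched as follows. This is a minimal, hypothetical illustration (the table and column names are stand-ins, not MIMIC-IV's actual schema): heterogeneous records, here structured lab results and free-text notes, are merged into a single analysis-ready table keyed by hospital admission.

```python
import pandas as pd

# Hypothetical stand-ins for heterogeneous clinical tables
# (MIMIC-IV's real schema and identifiers differ).
labs = pd.DataFrame({
    "hadm_id": [1, 1, 2],
    "item": ["creatinine", "sodium", "creatinine"],
    "value": [1.2, 140.0, 0.9],
})
notes = pd.DataFrame({
    "hadm_id": [1, 2],
    "text": ["Pt stable overnight.", "Admitted with chest pain."],
})

def harmonize(labs: pd.DataFrame, notes: pd.DataFrame) -> pd.DataFrame:
    """Pivot labs to one row per admission, then join the notes,
    yielding one analysis-ready row per hospital stay."""
    wide = labs.pivot_table(index="hadm_id", columns="item", values="value")
    merged = wide.join(notes.set_index("hadm_id"), how="outer")
    return merged.reset_index()

table = harmonize(labs, notes)
print(table)
```

Keying everything on a single admission identifier is what lets unimodal encoders (one per modality) and multimodal learners consume the same rows, which is the consistency the benchmark's pipeline is after.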
Similar Papers
Multimodal Foundation Models for Early Disease Detection
Machine Learning (CS)
Finds sickness early by combining all patient info.
Foundation models for electronic health records: representation dynamics and transferability
Machine Learning (CS)
Helps doctors predict patient health using shared data.
Generative Foundation Model for Structured and Unstructured Electronic Health Records
Artificial Intelligence
Helps doctors predict sickness and write notes faster.