MoMA: A Mixture-of-Multimodal-Agents Architecture for Enhancing Clinical Prediction Modelling
By: Jifan Gao, Mahmudur Rahman, John Caskey, and more
Potential Business Impact:
Helps doctors predict illness using all of a patient's health data.
Multimodal electronic health record (EHR) data provide richer, complementary insights into patient health compared to single-modality data. However, effectively integrating diverse data modalities for clinical prediction modeling remains challenging due to the substantial data requirements. We introduce a novel architecture, Mixture-of-Multimodal-Agents (MoMA), designed to leverage multiple large language model (LLM) agents for clinical prediction tasks using multimodal EHR data. MoMA employs specialized LLM agents ("specialist agents") to convert non-textual modalities, such as medical images and laboratory results, into structured textual summaries. These summaries, together with clinical notes, are combined by another LLM ("aggregator agent") into a unified multimodal summary, which a third LLM ("predictor agent") then uses to produce clinical predictions. In evaluations on three prediction tasks using real-world datasets with different modality combinations and prediction settings, MoMA outperforms current state-of-the-art methods, demonstrating enhanced accuracy and flexibility across tasks.
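To make the data flow concrete, here is a minimal sketch of the three-stage agent pipeline the abstract describes: specialist agents summarize non-textual modalities, an aggregator agent fuses those summaries with clinical notes, and a predictor agent produces the final output. All names here (call_llm, PatientRecord, the prompt wording) are illustrative assumptions, not the authors' implementation; each agent is reduced to a single LLM call behind a stub.

```python
# Sketch of the MoMA three-stage pipeline: specialist -> aggregator -> predictor.
# call_llm is a placeholder for whatever LLM backend is used in practice.
from dataclasses import dataclass


def call_llm(prompt: str) -> str:
    """Stub for an LLM call; replace with a real model API in practice."""
    return f"<LLM response to: {prompt[:40]}...>"


@dataclass
class PatientRecord:
    clinical_notes: str
    image_findings: str  # non-textual modality rendered as text for the sketch
    lab_results: str     # serialized laboratory panel


def specialist_summarize(modality_name: str, modality_data: str) -> str:
    """Specialist agent: convert one non-textual modality into a textual summary."""
    return call_llm(
        f"Summarize the following {modality_name} for clinical use:\n{modality_data}"
    )


def aggregate(clinical_notes: str, specialist_summaries: list[str]) -> str:
    """Aggregator agent: merge notes and specialist summaries into one summary."""
    joined = "\n".join(specialist_summaries)
    return call_llm(
        "Combine the clinical notes and modality summaries into a unified "
        f"multimodal summary:\nNotes:\n{clinical_notes}\nSummaries:\n{joined}"
    )


def predict(unified_summary: str, task: str) -> str:
    """Predictor agent: produce the clinical prediction from the unified summary."""
    return call_llm(f"Task: {task}\nPatient summary:\n{unified_summary}\nPrediction:")


def moma_pipeline(record: PatientRecord, task: str) -> str:
    # One specialist agent per non-textual modality present in the record.
    summaries = [
        specialist_summarize("medical imaging findings", record.image_findings),
        specialist_summarize("laboratory results", record.lab_results),
    ]
    unified = aggregate(record.clinical_notes, summaries)
    return predict(unified, task)


if __name__ == "__main__":
    record = PatientRecord(
        clinical_notes="72-year-old with dyspnea and fever...",
        image_findings="CXR: bilateral lower-lobe opacities.",
        lab_results="WBC 14.2, lactate 2.8, CRP 110.",
    )
    print(moma_pipeline(record, "Predict 30-day readmission risk"))
```

One design point the abstract emphasizes: because every modality is converted to text before aggregation, the same pipeline accommodates different modality combinations per task simply by changing which specialist agents run.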
Similar Papers
MoE-Health: A Mixture of Experts Framework for Robust Multimodal Healthcare Prediction
Machine Learning (CS)
Helps doctors predict sickness with mixed patient data.
Towards Generalized Routing: Model and Agent Orchestration for Adaptive and Efficient Inference
Multiagent Systems
Directs AI questions to the best tool.