One Patient, Many Contexts: Scaling Medical AI Through Contextual Intelligence
By: Michelle M. Li, Ben Y. Reis, Adam Rodman, and more
Potential Business Impact:
AI doctors change how they help based on the patient.
Medical foundation models, including language models trained on clinical notes, vision-language models trained on medical images, and multimodal models trained on electronic health records, can summarize clinical notes, answer medical questions, and assist in decision-making. Adapting these models to new populations, specialties, or settings typically requires fine-tuning, careful prompting, or retrieval from knowledge bases. This can be impractical and limits their ability to interpret unfamiliar inputs and adjust to clinical situations not represented during training. As a result, models are prone to contextual errors: predictions that appear reasonable but fail to account for critical patient-specific or contextual information. These errors stem from a fundamental limitation of current models: they struggle to dynamically adjust their behavior across the evolving contexts of medical care. In this Perspective, we outline a vision for context-switching in medical AI: models that adapt their reasoning, without retraining, to new specialties, populations, workflows, and clinical roles. We envision context-switching AI that can diagnose, manage, and treat a wide range of diseases across specialties and regions, and expand access to medical care.
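One of the adaptation routes the abstract mentions is careful prompting: conditioning a fixed model on an explicit description of the clinical context rather than retraining it. The toy sketch below illustrates that idea only; the `ClinicalContext` fields and `build_context_prompt` helper are hypothetical names invented for this example, not part of any system described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ClinicalContext:
    """A hypothetical bundle of the contextual axes the Perspective names:
    specialty, patient population, care setting, and clinical role."""
    specialty: str
    population: str
    setting: str
    role: str

def build_context_prompt(context: ClinicalContext, question: str) -> str:
    """Assemble a context-conditioned prompt so the same frozen model
    can behave differently across contexts (prompt-side switching,
    one possible mechanism among fine-tuning and retrieval)."""
    header = (
        f"Specialty: {context.specialty}\n"
        f"Population: {context.population}\n"
        f"Care setting: {context.setting}\n"
        f"Your role: {context.role}\n"
    )
    return header + "\n" + question

# Switching context means swapping the header, not the model weights.
pediatric = ClinicalContext(
    specialty="pediatrics",
    population="children under 12",
    setting="rural clinic",
    role="triage assistant",
)
prompt = build_context_prompt(pediatric, "Assess this patient's fever and rash.")
print(prompt)
```

The point of the sketch is the separation of concerns: the question stays fixed while the contextual header changes, which is the prompt-level analogue of the dynamic context-switching the authors envision.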
Similar Papers
Context-aware deep learning using individualized prior information reduces false positives in disease risk prediction and longitudinal health assessment
Artificial Intelligence
Finds cancer earlier by looking at past health.
Beyond Generative AI: World Models for Clinical Prediction, Counterfactuals, and Planning
Machine Learning (CS)
Helps doctors predict patient health and plan treatments.
An N-of-1 Artificial Intelligence Ecosystem for Precision Medicine
Artificial Intelligence
Helps doctors treat each patient uniquely.