How to Evaluate Medical AI
By: Ilia Kopanichuk, Petr Anokhin, Vladimir Shaposhnikov, and more
Potential Business Impact:
Measures whether AI diagnoses are as good as doctors'.
The integration of artificial intelligence (AI) into medical diagnostic workflows requires robust and consistent evaluation methods to ensure reliability and clinical relevance. Traditional metrics like precision and recall often fail to account for the inherent variability in expert judgments, leading to inconsistent assessments of AI performance. Inter-rater agreement statistics like Cohen's Kappa are more reliable but lack interpretability. We introduce Relative Precision and Recall of Algorithmic Diagnostics (RPAD and RRAD), new evaluation metrics that compare AI outputs against multiple expert opinions rather than a single reference. By normalizing performance against inter-expert disagreement, these metrics provide a more stable and realistic measure of the quality of a predicted diagnosis. Beyond this comprehensive analysis of diagnostic quality measures, our study yields an important side result. Our evaluation methodology does not require selecting diagnoses from a fixed list when evaluating a given case: both the models under test and the experts verifying them state free-form diagnoses. Our automated procedure for deciding whether two free-form clinical diagnoses are identical reaches 98% accuracy. We evaluate our approach on 360 medical dialogues, comparing multiple large language models (LLMs) against a panel of physicians. This large-scale study shows that top-performing models, such as DeepSeek-V3, achieve consistency on par with or exceeding expert consensus. Moreover, we demonstrate that expert judgments exhibit substantial variability, often greater than the disagreement between AI and human experts. This finding underscores the limitations of any absolute metric and supports the adoption of relative metrics in medical AI.
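The summary above does not give the RPAD/RRAD formulas. Below is a minimal sketch of one plausible reading, assuming the metric normalizes the AI's agreement rate with an expert panel by the experts' average pairwise agreement; the function names, the `matches` predicate, and the toy data are illustrative assumptions, not the paper's implementation.

```python
from itertools import combinations
from statistics import mean

def agreement(dx_a, dx_b, matches):
    """Fraction of diagnoses in dx_a that match some diagnosis in dx_b
    (a precision-style overlap between two free-form diagnosis lists)."""
    if not dx_a:
        return 0.0
    return sum(any(matches(a, b) for b in dx_b) for a in dx_a) / len(dx_a)

def relative_precision(ai_dx, expert_dxs, matches):
    """RPAD-style score (hypothetical formulation): the AI's mean agreement
    with each expert, normalized by the experts' mean pairwise agreement.
    Assumes at least two experts; a score near 1.0 means the AI agrees with
    the panel about as often as the experts agree with one another."""
    ai_vs_experts = mean(agreement(ai_dx, e, matches) for e in expert_dxs)
    inter_expert = mean(
        agreement(e1, e2, matches) for e1, e2 in combinations(expert_dxs, 2)
    )
    return ai_vs_experts / inter_expert if inter_expert else float("inf")

# Toy usage with a naive case-insensitive string matcher. The paper instead
# reports an automated free-form matcher with 98% accuracy; an LLM-based
# equivalence judge would replace this stand-in.
matches = lambda a, b: a.strip().lower() == b.strip().lower()
ai = ["acute bronchitis"]
experts = [["Acute bronchitis"], ["pneumonia"], ["acute bronchitis"]]
print(relative_precision(ai, experts, matches))  # 2.0 on this toy panel
```

The normalization is the point of the design: an absolute agreement rate of 2/3 looks mediocre in isolation, but relative to a panel whose members only agree with each other 1/3 of the time, it indicates the model tracks the panel better than the experts track one another.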
Similar Papers
RAD: Towards Trustworthy Retrieval-Augmented Multi-modal Clinical Diagnosis
Machine Learning (CS)
Helps doctors diagnose illnesses using AI.
Over-Relying on Reliance: Towards Realistic Evaluations of AI-Based Clinical Decision Support
Human-Computer Interaction
Helps doctors use AI to make better patient choices.