Evaluating Medical LLMs by Levels of Autonomy: A Survey Moving from Benchmarks to Applications
By: Xiao Ye, Jacob Dineen, Zhaonan Li, and more
Potential Business Impact:
Helps clinicians judge when medical AI is trustworthy enough to use in patient care.
Medical large language models achieve strong scores on standard benchmarks, yet translating those results into safe, reliable performance in clinical workflows remains a challenge. This survey reframes evaluation through a levels-of-autonomy lens (L0-L3) spanning informational tools, information transformation and aggregation, decision support, and supervised agents. We align existing benchmarks and metrics with the actions permitted at each level and their associated risks, making the evaluation targets explicit. This motivates a level-conditioned blueprint for selecting metrics, assembling evidence, and reporting claims, alongside directions that link evaluation to oversight. By centering autonomy, the survey moves the field beyond score-based claims toward credible, risk-aware evidence for real clinical use.
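To make the level-conditioned idea concrete, here is a minimal sketch of how autonomy levels might be mapped to permitted actions and level-appropriate evaluation metrics. The L0-L3 names follow the survey's taxonomy, but the specific action lists, metric names, and the `metrics_for` helper are illustrative assumptions, not the paper's prescribed blueprint.

```python
from dataclasses import dataclass

# Hypothetical mapping for illustration: the L0-L3 labels come from the
# survey, but the actions and metrics listed here are example choices.

@dataclass(frozen=True)
class AutonomyLevel:
    level: str                          # e.g. "L0"
    role: str                           # what the system does at this level
    permitted_actions: tuple[str, ...]  # actions within scope at this level
    candidate_metrics: tuple[str, ...]  # evidence matched to those actions

LEVELS = (
    AutonomyLevel(
        level="L0",
        role="informational tool",
        permitted_actions=("answer factual medical queries",),
        candidate_metrics=("QA accuracy", "citation faithfulness"),
    ),
    AutonomyLevel(
        level="L1",
        role="information transformation and aggregation",
        permitted_actions=("summarize notes", "aggregate patient records"),
        candidate_metrics=("summary factuality", "omission/hallucination rate"),
    ),
    AutonomyLevel(
        level="L2",
        role="decision support",
        permitted_actions=("suggest differentials", "flag drug interactions"),
        candidate_metrics=("recommendation concordance", "calibration",
                           "harm-weighted error"),
    ),
    AutonomyLevel(
        level="L3",
        role="supervised agent",
        permitted_actions=("draft orders for clinician sign-off",
                           "multi-step tool use"),
        candidate_metrics=("task completion under oversight",
                           "escalation/deferral quality"),
    ),
)

def metrics_for(level: str) -> tuple[str, ...]:
    """Return the candidate metrics appropriate to a claimed autonomy level."""
    for entry in LEVELS:
        if entry.level == level:
            return entry.candidate_metrics
    raise ValueError(f"unknown autonomy level: {level!r}")

if __name__ == "__main__":
    # A claim made at L2 should be backed by L2-appropriate evidence,
    # not by L0-style QA scores alone.
    print(metrics_for("L2"))
```

The point of the structure is that the evaluation target is conditioned on the level: a system claiming L2 decision support is evaluated against decision-support evidence, not generic benchmark accuracy.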
Similar Papers
Beyond the Leaderboard: Rethinking Medical Benchmarks for Large Language Models
Computation and Language
Asks whether medical LLM benchmarks actually reflect safe, real-world performance.
LLMEval-Med: A Real-world Clinical Benchmark for Medical LLMs with Physician Validation
Computation and Language
Benchmarks medical LLMs on real-world clinical questions validated by physicians.
Evaluating LLMs Across Multi-Cognitive Levels: From Medical Knowledge Mastery to Scenario-Based Problem Solving
Computation and Language
Tests LLMs from basic medical knowledge recall to scenario-based clinical problem solving.