Evaluating Medical LLMs by Levels of Autonomy: A Survey Moving from Benchmarks to Applications

Published: October 20, 2025 | arXiv ID: 2510.17764v1

By: Xiao Ye, Jacob Dineen, Zhaonan Li, and more

Potential Business Impact:

Gives clinicians a risk-aware, autonomy-level framework for judging when medical AI outputs can be trusted in patient care.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Medical large language models achieve strong scores on standard benchmarks; however, transferring those results into safe and reliable performance in clinical workflows remains a challenge. This survey reframes evaluation through a levels-of-autonomy lens (L0-L3), spanning informational tools, information transformation and aggregation, decision support, and supervised agents. We align existing benchmarks and metrics with the actions permitted at each level and their associated risks, making the evaluation targets explicit. This motivates a level-conditioned blueprint for selecting metrics, assembling evidence, and reporting claims, alongside directions that link evaluation to oversight. By centering autonomy, the survey moves the field beyond score-based claims toward credible, risk-aware evidence for real clinical use.
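To make the level-conditioned evaluation idea concrete, here is a minimal Python sketch. The level names follow the abstract's L0-L3 taxonomy; the metric lists attached to each level are hypothetical placeholders for illustration, not the paper's actual alignments, which are detailed in the survey itself.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of autonomy for medical LLMs, per the survey's L0-L3 framing."""
    L0_INFORMATIONAL = 0    # informational tools
    L1_TRANSFORMATION = 1   # information transformation and aggregation
    L2_DECISION_SUPPORT = 2 # decision support
    L3_SUPERVISED_AGENT = 3 # supervised agents

# Hypothetical mapping from autonomy level to evaluation targets.
# The metric names below are illustrative placeholders only.
EVALUATION_FOCUS: dict[AutonomyLevel, list[str]] = {
    AutonomyLevel.L0_INFORMATIONAL: ["factual accuracy", "readability"],
    AutonomyLevel.L1_TRANSFORMATION: ["faithfulness to source", "omission rate"],
    AutonomyLevel.L2_DECISION_SUPPORT: ["diagnostic concordance", "calibration"],
    AutonomyLevel.L3_SUPERVISED_AGENT: ["action safety", "human-override rate"],
}

def metrics_for(level: AutonomyLevel) -> list[str]:
    """Return evaluation targets conditioned on the permitted autonomy level."""
    return EVALUATION_FOCUS[level]

if __name__ == "__main__":
    # Print the (placeholder) evaluation focus for each autonomy level.
    for level in AutonomyLevel:
        print(f"{level.name}: {', '.join(metrics_for(level))}")
```

The point of the sketch is the structure, not the specific metrics: claims about a system should be reported against the metric set for the level of autonomy it is actually granted, rather than against a single benchmark score.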

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Computation and Language