Inflated Excellence or True Performance? Rethinking Medical Diagnostic Benchmarks with Dynamic Evaluation
By: Xiangxu Zhang, Lei Li, Yanyun Zhou, and more
Potential Business Impact:
Tests AI doctors on real patient problems.
Medical diagnostics is a high-stakes and complex domain that is critical to patient care. However, current evaluations of large language models (LLMs) are fundamentally misaligned with real-world clinical practice. Most rely on static benchmarks derived from public medical exam items, which tend to overestimate model performance and ignore the gap between textbook cases and the ambiguous, variable presentations seen in real practice. Recent efforts toward dynamic evaluation offer a promising alternative, but their improvements are limited to superficial perturbations and a narrow focus on accuracy. To address these gaps, we propose DyReMe, a dynamic benchmark for medical diagnostics that better reflects real clinical practice. Unlike static exam-style questions, DyReMe generates fresh, consultation-like cases that introduce distractors such as differential diagnoses and common misdiagnosis factors. It also varies expression styles to mimic diverse real-world query habits. Beyond accuracy, DyReMe evaluates LLMs on three additional clinically relevant dimensions: veracity, helpfulness, and consistency. Our experiments demonstrate that this dynamic approach yields more challenging and realistic assessments, revealing significant misalignments between the performance of state-of-the-art LLMs and real clinical practice. These findings highlight the urgent need for evaluation frameworks that better reflect the demands of trustworthy medical diagnostics.
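The core idea of the abstract can be sketched in miniature: generate fresh variants of a seed case by injecting a distractor diagnosis and varying the phrasing style, then score a model across variants on more than raw accuracy. The sketch below is purely illustrative under stated assumptions; the case data, the stub model, and the scoring rules are hypothetical stand-ins, not DyReMe's actual pipeline.

```python
import random

# Illustrative sketch of a dynamic-evaluation loop in the spirit described by
# the abstract. All names, data, and scoring rules are assumptions for
# illustration, not the paper's implementation.

SEED = {
    "diagnosis": "iron-deficiency anemia",
    "vignette": "A 34-year-old reports fatigue and pallor; ferritin is low.",
    "distractors": ["thalassemia trait", "anemia of chronic disease"],
}

def make_variant(seed, rng):
    """Build a consultation-like variant: add a distractor hint, vary style."""
    d = rng.choice(seed["distractors"])
    base = f"{seed['vignette']} A colleague suspected {d}."
    styles = [base, "my patient told me: " + base.lower()]
    return rng.choice(styles)

def stub_model(case_text):
    # Placeholder model: always answers the correct diagnosis.
    return "iron-deficiency anemia"

def evaluate(seed, n=10, rng_seed=0):
    """Score across n generated variants: accuracy plus consistency
    (share of variants agreeing with the modal answer)."""
    rng = random.Random(rng_seed)
    answers = [stub_model(make_variant(seed, rng)) for _ in range(n)]
    accuracy = sum(a == seed["diagnosis"] for a in answers) / n
    consistency = max(answers.count(a) for a in set(answers)) / n
    return accuracy, consistency

print(evaluate(SEED))  # perfect stub model → (1.0, 1.0)
```

A real harness would replace `stub_model` with an LLM call and add graded judgments for veracity and helpfulness; the point of the sketch is only that scores are computed over freshly generated, perturbed cases rather than a fixed exam set.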
Similar Papers
LLMEval-Med: A Real-world Clinical Benchmark for Medical LLMs with Physician Validation
Computation and Language
Tests AI for doctor-level medical answers.
Beyond the Leaderboard: Rethinking Medical Benchmarks for Large Language Models
Computation and Language
Checks if AI for doctors is safe and real.
ClinDEF: A Dynamic Evaluation Framework for Large Language Models in Clinical Reasoning
Computation and Language
Tests how well AI doctors can figure out illnesses.