Evaluating Hierarchical Clinical Document Classification Using Reasoning-Based LLMs
By: Akram Mustafa, Usman Naseem, Mostafa Rahimi Azghadi
Potential Business Impact:
Computers can help hospitals assign diagnosis codes to patient records.
This study evaluates how well large language models (LLMs) can assign ICD-10 codes to hospital discharge summaries, a critical but error-prone task in healthcare. Using 1,500 summaries from the MIMIC-IV dataset and focusing on the 10 most frequent ICD-10 codes, the study tested 11 LLMs, including models with and without structured reasoning capabilities. Medical terms were extracted with a clinical NLP tool (cTAKES), and each model was prompted in a consistent, coder-like format. No model achieved an F1 score above 57%, and performance dropped as code specificity increased. Reasoning-based models generally outperformed non-reasoning ones, with Gemini 2.5 Pro performing best overall. Some codes, such as those related to chronic heart disease, were classified more accurately than others. The findings suggest that while LLMs can assist human coders, they are not yet reliable enough for full automation. Future work should explore hybrid methods, domain-specific model training, and the use of structured clinical data.
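To make the evaluation pipeline concrete, here is a minimal Python sketch of the kind of loop the summary describes: build a coder-style prompt from cTAKES-extracted terms, restrict predictions to a fixed set of frequent codes, and score predicted codes against gold labels with F1. The code list, prompt wording, helper names (build_prompt, micro_f1), and the choice of micro-averaged F1 are illustrative assumptions, not the paper's exact protocol.

# Sketch of a coder-style prompting and F1-scoring loop for ICD-10 assignment.
# All names and values below are illustrative; they are not taken from the paper.

# Placeholder code set standing in for the 10 most frequent ICD-10 codes
# in the 1,500 MIMIC-IV discharge summaries (the actual list is in the paper).
TOP_CODES = ["I10", "E78.5", "I25.10", "E11.9", "Z87.891",
             "K21.9", "F32.9", "N17.9", "F41.9", "Z79.4"]

def build_prompt(clinical_terms: list[str]) -> str:
    """Assemble a coder-like prompt from terms extracted by a clinical NLP
    tool such as cTAKES. The exact wording here is an assumption."""
    terms = "; ".join(clinical_terms)
    codes = ", ".join(TOP_CODES)
    return (
        "You are a certified medical coder. Given the extracted clinical "
        f"concepts below, return every applicable ICD-10 code from this list: {codes}.\n"
        f"Clinical concepts: {terms}\n"
        "Answer with a comma-separated list of codes only."
    )

def micro_f1(gold: list[set[str]], pred: list[set[str]]) -> float:
    """Micro-averaged F1 over all documents and codes (averaging choice assumed)."""
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        tp += len(g & p)   # codes correctly predicted
        fp += len(p - g)   # codes predicted but not in the gold labels
        fn += len(g - p)   # gold codes the model missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy usage (not MIMIC-IV data): one document, model returns only one of two codes.
prompt = build_prompt(["hypertension", "type 2 diabetes mellitus", "chest pain"])
gold = [{"I10", "E11.9"}]
pred = [{"I10"}]                 # e.g. parsed from a model's reply to the prompt
print(micro_f1(gold, pred))      # 0.666...

In this framing, the per-code differences reported in the paper (e.g. chronic heart disease codes being easier) would come from computing the same precision/recall counts separately for each code rather than pooling them.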
Similar Papers
Can Reasoning LLMs Enhance Clinical Document Classification?
Computation and Language
Helps doctors turn notes into sickness codes.
Model selection meets clinical semantics: Optimizing ICD-10-CM prediction via LLM-as-Judge evaluation, redundancy-aware sampling, and section-aware fine-tuning
Artificial Intelligence
Automatically turns doctors' notes into medical codes.
Quantifying the Reasoning Abilities of LLMs on Real-world Clinical Cases
Computation and Language
Tests AI doctors' thinking for better patient care.