Evaluating Hierarchical Clinical Document Classification Using Reasoning-Based LLMs

Published: July 2, 2025 | arXiv ID: 2507.03001v1

By: Akram Mustafa, Usman Naseem, Mostafa Rahimi Azghadi

Potential Business Impact:

Large language models could help automate the assignment of diagnostic codes to patient records, reducing coding errors and administrative workload in hospitals.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This study evaluates how well large language models (LLMs) can classify ICD-10 codes from hospital discharge summaries, a critical but error-prone task in healthcare. Using 1,500 summaries from the MIMIC-IV dataset and focusing on the 10 most frequent ICD-10 codes, the study tested 11 LLMs, including models with and without structured reasoning capabilities. Medical terms were extracted using a clinical NLP tool (cTAKES), and models were prompted in a consistent, coder-like format. None of the models achieved an F1 score above 57%, with performance dropping as code specificity increased. Reasoning-based models generally outperformed non-reasoning ones, with Gemini 2.5 Pro performing best overall. Some codes, such as those related to chronic heart disease, were classified more accurately than others. The findings suggest that while LLMs can assist human coders, they are not yet reliable enough for full automation. Future work should explore hybrid methods, domain-specific model training, and the use of structured clinical data.
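The F1-based evaluation described above can be sketched as a simple micro-averaged F1 computation over per-document sets of predicted ICD-10 codes. This is a minimal illustrative sketch, not the authors' actual pipeline: the codes and documents below are toy examples, not drawn from MIMIC-IV.

```python
def micro_f1(gold: list[set[str]], pred: list[set[str]]) -> float:
    """Micro-averaged F1 over per-document ICD-10 code sets.

    Each element of `gold` / `pred` is the set of codes for one
    discharge summary; counts are pooled across all documents.
    """
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correctly predicted codes
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # predicted but not in gold
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # in gold but missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

# Toy example: two discharge summaries with real ICD-10 code formats
# (I10 = essential hypertension, E11.9 = type 2 diabetes, etc.)
gold = [{"I10", "E11.9"}, {"I25.10"}]
pred = [{"I10"}, {"I25.10", "J44.9"}]
print(round(micro_f1(gold, pred), 3))  # → 0.667
```

Micro-averaging pools true/false positives across all documents, which is a common choice when code frequencies are highly imbalanced, as they are in ICD-10 data.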

Country of Origin
🇦🇺 Australia

Page Count
33 pages

Category
Computer Science:
Computation and Language