Revisiting the MIMIC-IV Benchmark: Experiments Using Language Models for Electronic Health Records

Published: April 29, 2025 | arXiv ID: 2504.20547v1

By: Jesus Lovon, Thouria Ben-Haddi, Jules Di Scala, and more

Potential Business Impact:

Makes electronic health records easier to use with language models, helping predict patient outcomes such as mortality.

Business Areas:
Electronic Health Record (EHR), Health Care

The lack of standardized evaluation benchmarks for text inputs in the medical domain is a barrier to widely adopting and leveraging the potential of natural language models for health-related downstream tasks. This paper revisits an openly available MIMIC-IV benchmark for electronic health records (EHRs) to address this issue. First, we integrate the MIMIC-IV data within the Hugging Face datasets library to allow easy sharing and use of this collection. Second, we investigate the application of templates to convert EHR tabular data to text. Experiments using fine-tuned and zero-shot LLMs on the patient mortality prediction task show that fine-tuned text-based models are competitive with robust tabular classifiers. In contrast, zero-shot LLMs struggle to leverage EHR representations. This study underlines the potential of text-based approaches in the medical field and highlights areas for further improvement.
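
As a loose illustration of the template-based serialization the abstract describes, the sketch below builds a toy Hugging Face `datasets.Dataset` and maps each tabular record to a sentence suitable for a text-based classifier. The column names, feature choices, and template wording are assumptions for illustration only; the paper's actual templates and the real MIMIC-IV data (which requires credentialed PhysioNet access) may differ.

```python
# Minimal sketch: serialize tabular EHR-style records to text with a template.
# Columns and wording are hypothetical, not the paper's exact setup.
from datasets import Dataset

# Toy rows standing in for MIMIC-IV tabular features; the real data is not
# reproduced here because it requires credentialed access.
records = {
    "age": [67, 54],
    "gender": ["F", "M"],
    "heart_rate_mean": [88.0, 102.5],
    "mortality_label": [0, 1],
}

def tabular_to_text(example):
    # Template-based conversion of one tabular record into a sentence,
    # which can then be fed to a fine-tuned or zero-shot language model.
    example["text"] = (
        f"The patient is a {example['age']}-year-old "
        f"{'female' if example['gender'] == 'F' else 'male'} "
        f"with a mean heart rate of {example['heart_rate_mean']} bpm."
    )
    return example

dataset = Dataset.from_dict(records).map(tabular_to_text)
print(dataset[0]["text"])
```

Packaging the serialized records as a `datasets.Dataset` mirrors the paper's goal of making the collection easy to share and load through the Hugging Face ecosystem.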

Repos / Data Links

Page Count
8 pages

Category
Computer Science: Computation and Language