Revisiting the MIMIC-IV Benchmark: Experiments Using Language Models for Electronic Health Records
By: Jesus Lovon, Thouria Ben-Haddi, Jules Di Scala, and more
Potential Business Impact:
Helps doctors understand patient records better.
The lack of standardized evaluation benchmarks for text inputs in the medical domain can be a barrier to the wide adoption of natural language models and to leveraging their potential for health-related downstream tasks. This paper revisits an openly available MIMIC-IV benchmark for electronic health records (EHRs) to address this issue. First, we integrate the MIMIC-IV data within the Hugging Face datasets library to allow easy sharing and reuse of this collection. Second, we investigate the application of templates to convert EHR tabular data to text. Experiments using fine-tuned and zero-shot LLMs on the patient mortality prediction task show that fine-tuned text-based models are competitive with robust tabular classifiers. In contrast, zero-shot LLMs struggle to leverage EHR representations. This study underlines the potential of text-based approaches in the medical field and highlights areas for further improvement.
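The abstract describes two concrete steps: exposing MIMIC-IV through the Hugging Face datasets library and serializing tabular EHR records into text with templates. The sketch below illustrates both under stated assumptions; the file path, column names, and template wording are hypothetical placeholders rather than the authors' actual release, and MIMIC-IV itself requires credentialed PhysioNet access.

```python
# Minimal sketch of the two steps described in the abstract. The JSON file
# path and column names (age, gender, heart_rate, ...) are hypothetical
# placeholders; the data is assumed to have been obtained separately under
# a PhysioNet credentialed-access agreement.
from datasets import load_dataset

# Load a locally prepared MIMIC-IV split as a Hugging Face dataset.
ds = load_dataset("json", data_files={"train": "mimic_iv_train.json"})["train"]

def ehr_row_to_text(row: dict) -> str:
    """Serialize one tabular EHR record into a sentence using a fixed template."""
    return (
        f"The patient is {row['age']} years old, {row['gender']}, "
        f"admitted with type {row['admission_type']}. "
        f"Mean heart rate: {row['heart_rate']} bpm; "
        f"mean blood pressure: {row['blood_pressure']} mmHg."
    )

# Add a text column so the serialized records can feed a language model,
# either for fine-tuning a text classifier or for zero-shot prompting.
ds = ds.map(lambda row: {"text": ehr_row_to_text(row)})
print(ds[0]["text"])
```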
Similar Papers
Evaluating LLM Abilities to Understand Tabular Electronic Health Records: A Comprehensive Study of Patient Data Extraction and Retrieval
Computation and Language
Helps computers find patient information faster.
A Comprehensive Survey of Electronic Health Record Modeling: From Deep Learning Approaches to Large Language Models
Machine Learning (CS)
Helps doctors understand patient health records better.
Plain language adaptations of biomedical text using LLMs: Comparison of evaluation metrics
Computation and Language
Makes doctors' notes easy for anyone to read.