Temporal Entailment Pretraining for Clinical Language Models over EHR Data
By: Tatsunori Tanaka, Fi Zheng, Kai Sato, and more
Potential Business Impact:
Helps doctors predict patient health changes over time.
Clinical language models have achieved strong performance on downstream tasks by pretraining on domain-specific corpora such as discharge summaries and medical notes. However, most approaches treat the electronic health record as a static document, neglecting the temporally evolving and causally entwined nature of patient trajectories. In this paper, we introduce a novel temporal entailment pretraining objective for language models in the clinical domain. Our method formulates EHR segments as temporally ordered sentence pairs and trains the model to determine whether a later state is entailed by, contradictory to, or neutral with respect to an earlier state. Through this temporally structured pretraining task, models learn to perform latent clinical reasoning over time, improving their ability to generalize across forecasting and diagnosis tasks. We pretrain on a large corpus derived from MIMIC-IV and demonstrate state-of-the-art results on temporal clinical QA, early warning prediction, and disease progression modeling.
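To make the objective concrete, here is a minimal sketch (not the authors' implementation) of temporal entailment pretraining: an encoder reads an (earlier, later) pair of EHR segments and a classification head predicts entailed, contradictory, or neutral. The encoder checkpoint, label encoding, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a temporal entailment pretraining objective over EHR
# segment pairs. Model name, label order, and hyperparameters are assumptions,
# not details from the paper.

import torch
from torch import nn
from transformers import AutoTokenizer, AutoModel

MODEL_NAME = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical encoder


class TemporalEntailmentModel(nn.Module):
    """Encodes an (earlier, later) segment pair and predicts a 3-way label."""

    def __init__(self, encoder_name: str = MODEL_NAME, num_labels: int = 3):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]   # [CLS] representation of the pair
        return self.classifier(cls)         # logits: entailed / contradictory / neutral


def pretrain_step(model, tokenizer, batch, optimizer, device="cpu"):
    """One optimization step over a batch of temporally ordered segment pairs.

    `batch` is a list of (earlier_text, later_text, label) triples, with
    label 0 = entailed, 1 = contradictory, 2 = neutral (illustrative encoding).
    """
    earlier = [b[0] for b in batch]
    later = [b[1] for b in batch]
    labels = torch.tensor([b[2] for b in batch], device=device)

    enc = tokenizer(earlier, later, padding=True, truncation=True,
                    max_length=512, return_tensors="pt").to(device)
    logits = model(**enc)
    loss = nn.functional.cross_entropy(logits, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = TemporalEntailmentModel()
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Toy pair: a later note that is entailed by the earlier one.
    toy_batch = [
        ("Day 1: patient febrile, started on broad-spectrum antibiotics.",
         "Day 3: fever resolving on current antibiotic regimen.",
         0),
    ]
    print("loss:", pretrain_step(model, tokenizer, toy_batch, optimizer))
```

In practice, the (earlier, later, label) triples would be derived from temporally ordered MIMIC-IV note segments; the sketch above only illustrates the three-way entailment loss over ordered pairs.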
Similar Papers
Early Risk Prediction with Temporally and Contextually Grounded Clinical Language Processing
Computation and Language
Finds sickness risks early from doctor notes.
TIMER: Temporal Instruction Modeling and Evaluation for Longitudinal Clinical Records
Artificial Intelligence
Helps computers understand patient health history over time.
Building Patient Journeys in Hebrew: A Language Model for Clinical Timeline Extraction
Computation and Language
Helps doctors understand patient health history faster.