Rethinking Tokenization for Clinical Time Series: When Less is More
By: Rafi Al Attrach, Rajna Fani, David Restrepo, et al.
Tokenization strategies shape how models process electronic health records, yet fair comparisons of their effectiveness remain limited. We present a systematic evaluation of tokenization approaches for clinical time series modeling with transformer-based architectures, revealing task-dependent and sometimes counterintuitive findings about the importance of temporal and value features. Through controlled ablations across four clinical prediction tasks on MIMIC-IV, we demonstrate that explicit time encodings provide no consistent, statistically significant benefit on the evaluated downstream tasks. Value features show task-dependent importance, affecting mortality prediction but not readmission, suggesting that code sequences alone can carry sufficient predictive signal. We further show that frozen pretrained code encoders substantially outperform their trainable counterparts while requiring far fewer trainable parameters. Larger clinical encoders provide consistent improvements across tasks, and because their embeddings are frozen, the added capacity comes without additional training overhead. Our controlled evaluation enables fairer tokenization comparisons and demonstrates that simpler, parameter-efficient approaches can, in many cases, achieve strong performance, though the optimal tokenization strategy remains task-dependent.
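To make the ablation setup concrete, the sketch below shows one way the described comparisons could be wired up in PyTorch: a transformer over clinical code sequences with switches for explicit time encodings and value features, and a code embedding table that can be initialized from a pretrained encoder and frozen so that only the downstream layers are trained. This is a minimal illustration under our own assumptions, not the authors' implementation; all module names, dimensions, and the random stand-in for the pretrained embeddings are hypothetical.

```python
# Minimal sketch (not the paper's code): ablation switches for time/value
# features and a frozen pretrained code encoder in a transformer classifier.
import torch
import torch.nn as nn


class ClinicalSequenceModel(nn.Module):
    def __init__(self, vocab_size, d_model=128, use_time=False, use_value=True,
                 pretrained_code_embeddings=None, freeze_code_encoder=True):
        super().__init__()
        # Code encoder: trained from scratch, or initialized from a pretrained
        # embedding table and optionally frozen (parameter-efficient setting).
        self.code_embed = nn.Embedding(vocab_size, d_model)
        if pretrained_code_embeddings is not None:
            self.code_embed.weight.data.copy_(pretrained_code_embeddings)
        self.code_embed.weight.requires_grad = not freeze_code_encoder

        # Ablation switches for explicit time encodings and value features.
        self.use_time, self.use_value = use_time, use_value
        self.time_proj = nn.Linear(1, d_model) if use_time else None
        self.value_proj = nn.Linear(1, d_model) if use_value else None

        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)  # e.g. mortality/readmission logit

    def forward(self, codes, times=None, values=None):
        x = self.code_embed(codes)                         # (B, T, d_model)
        if self.use_time and times is not None:
            x = x + self.time_proj(times.unsqueeze(-1))    # add time encoding
        if self.use_value and values is not None:
            x = x + self.value_proj(values.unsqueeze(-1))  # add value feature
        h = self.encoder(x)
        return self.head(h.mean(dim=1)).squeeze(-1)        # pooled logit


# Usage: frozen "pretrained" code table (random stand-in here), value features
# on, explicit time encodings ablated away.
pretrained = torch.randn(1000, 128)
model = ClinicalSequenceModel(1000, use_time=False, use_value=True,
                              pretrained_code_embeddings=pretrained)
codes = torch.randint(0, 1000, (2, 16))
values = torch.randn(2, 16)
logits = model(codes, values=values)
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(logits.shape, trainable)  # only the non-frozen parameters are trained
```

In this setup, counting `requires_grad` parameters makes the parameter-efficiency comparison between frozen and trainable code encoders explicit, and flipping `use_time`/`use_value` yields the feature ablations discussed above.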