Temporal Tokenization Strategies for Event Sequence Modeling with Large Language Models
By: Zefang Liu, Nam Nguyen, Yinzhu Quan, and more
Potential Business Impact:
Helps computers better understand the timing of events.
Representing continuous time is a critical and under-explored challenge in modeling temporal event sequences with large language models (LLMs). Various strategies, such as byte-level representations or calendar tokens, have been proposed, but the optimal approach remains unclear, especially given the diverse statistical distributions of real-world event data, which range from smooth log-normal to discrete, spiky patterns. This paper presents the first empirical study of temporal tokenization for event sequences, comparing five distinct encoding strategies: naive numeric strings, high-precision byte-level representations, human-semantic calendar tokens, classic uniform binning, and adaptive residual scalar quantization. We evaluate these strategies by fine-tuning LLMs on real-world datasets that exemplify these diverse distributions. Our analysis reveals that no single strategy is universally superior; instead, prediction performance depends heavily on aligning the tokenizer with the data's statistical properties: log-based strategies excel on skewed distributions, while human-centric calendar formats prove robust for mixed modalities.
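To make the compared encodings concrete, the sketch below shows how a single inter-event gap (or timestamp) could be rendered under four of the strategies named above: a naive numeric string, human-semantic calendar tokens, classic uniform binning, and a log-scale binning variant of the kind suited to skewed gap distributions. The function names, token formats, bin counts, and range limits here are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative temporal tokenizers for one inter-event gap (seconds) or timestamp.
# All names, token formats, and parameter choices are hypothetical examples.
import math
from datetime import datetime, timezone

def numeric_string_tokens(gap_seconds: float) -> str:
    """Naive numeric string: hand the raw digits to the LLM's own tokenizer."""
    return f"{gap_seconds:.2f}"

def calendar_tokens(timestamp: float) -> str:
    """Human-semantic calendar fields (year / month / day / hour / minute)."""
    dt = datetime.fromtimestamp(timestamp, tz=timezone.utc)
    return f"<y_{dt.year}><m_{dt.month}><d_{dt.day}><h_{dt.hour}><min_{dt.minute}>"

def uniform_bin_token(gap_seconds: float, max_gap: float = 86400.0,
                      num_bins: int = 256) -> str:
    """Classic uniform binning: split [0, max_gap] into num_bins equal-width tokens."""
    clipped = min(max(gap_seconds, 0.0), max_gap)
    bin_id = min(int(clipped / max_gap * num_bins), num_bins - 1)
    return f"<bin_{bin_id}>"

def log_bin_token(gap_seconds: float, max_gap: float = 86400.0,
                  num_bins: int = 256, min_gap: float = 1e-3) -> str:
    """Log-scale binning: finer resolution for the short gaps that dominate skewed data."""
    clipped = min(max(gap_seconds, min_gap), max_gap)
    frac = math.log(clipped / min_gap) / math.log(max_gap / min_gap)
    bin_id = min(int(frac * num_bins), num_bins - 1)
    return f"<logbin_{bin_id}>"

if __name__ == "__main__":
    gap = 3723.5  # roughly one hour between consecutive events
    print(numeric_string_tokens(gap))     # "3723.50"
    print(calendar_tokens(1700000000.0))  # "<y_2023><m_11><d_14><h_22><min_13>"
    print(uniform_bin_token(gap))         # coarse bin over a 24-hour range
    print(log_bin_token(gap))             # finer bins reserved for short gaps
```

The log-scale variant mirrors the abstract's observation that log-based strategies suit skewed gap distributions, since it allocates most of its bins to short intervals; the byte-level and residual scalar quantization strategies are omitted here for brevity.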
Similar Papers
Rethinking Tokenization for Clinical Time Series: When Less is More
Machine Learning (CS)
Makes AI better at reading patient health records.
Innovative tokenisation of structured data for LLM training
Machine Learning (CS)
Turns messy data into neat lists for smart computers.
From Values to Tokens: An LLM-Driven Framework for Context-aware Time Series Forecasting via Symbolic Discretization
Machine Learning (CS)
Predicts future events by turning numbers into words.