Score: 1

Temporal Tokenization Strategies for Event Sequence Modeling with Large Language Models

Published: December 15, 2025 | arXiv ID: 2512.13618v1

By: Zefang Liu, Nam Nguyen, Yinzhu Quan, and more

Potential Business Impact:

Helps language models represent and predict the timing of events more accurately.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Representing continuous time is a critical and under-explored challenge in modeling temporal event sequences with large language models (LLMs). Various strategies, such as byte-level representations and calendar tokens, have been proposed, yet the optimal approach remains unclear, especially given the diverse statistical distributions of real-world event data, which range from smooth log-normal to discrete, spiky patterns. This paper presents the first empirical study of temporal tokenization for event sequences, comparing five distinct encoding strategies: naive numeric strings, high-precision byte-level representations, human-semantic calendar tokens, classic uniform binning, and adaptive residual scalar quantization. We evaluate these strategies by fine-tuning LLMs on real-world datasets that exemplify these diverse distributions. Our analysis reveals that no single strategy is universally superior; instead, prediction performance depends heavily on aligning the tokenizer with the data's statistical properties, with log-based strategies excelling on skewed distributions and human-centric formats proving robust for mixed modalities.
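For a concrete feel of how these encodings differ, the short Python sketch below (not from the paper; all function names, bin counts, and token formats are illustrative assumptions) renders a single inter-event gap under four styles the abstract names: a naive numeric string, calendar-style tokens, uniform binning, and a simple log-scale binning stand-in for the adaptive quantizers.

```python
# Illustrative sketch only: how one inter-event gap (in seconds) might be rendered
# under several of the tokenization styles named in the abstract. Bin counts,
# token formats, and function names are assumptions, not the paper's implementation.
import math
from datetime import datetime, timezone


def numeric_string_tokens(gap_seconds: float) -> str:
    """Naive numeric string: the raw value spelled out digit by digit."""
    return " ".join(f"{gap_seconds:.1f}")


def calendar_tokens(timestamp: float) -> str:
    """Human-semantic calendar tokens: month / day / hour fields as separate tokens."""
    dt = datetime.fromtimestamp(timestamp, tz=timezone.utc)
    return f"<month_{dt.month}> <day_{dt.day}> <hour_{dt.hour}>"


def uniform_bin_token(gap_seconds: float, max_gap: float = 86400.0, n_bins: int = 64) -> str:
    """Classic uniform binning: equal-width bins over [0, max_gap]."""
    idx = min(int(gap_seconds / max_gap * n_bins), n_bins - 1)
    return f"<bin_{idx}>"


def log_bin_token(gap_seconds: float, max_gap: float = 86400.0, n_bins: int = 64) -> str:
    """Log-scale binning: equal-width bins in log space, suited to skewed gap distributions."""
    idx = min(int(math.log1p(gap_seconds) / math.log1p(max_gap) * n_bins), n_bins - 1)
    return f"<logbin_{idx}>"


if __name__ == "__main__":
    gap = 37.5          # seconds between two consecutive events
    ts = 1765756800.0   # 2025-12-15 00:00:00 UTC, an example absolute timestamp
    print(numeric_string_tokens(gap))   # "3 7 . 5"
    print(calendar_tokens(ts))          # "<month_12> <day_15> <hour_0>"
    print(uniform_bin_token(gap))       # "<bin_0>"
    print(log_bin_token(gap))           # "<logbin_20>"
```

The contrast between the uniform and log-scale bin indices for the same 37.5-second gap hints at why the paper finds log-based schemes better matched to heavily skewed gap distributions: uniform bins collapse most small gaps into a single token, while log-scale bins spread them out.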

Repos / Data Links

Page Count
10 pages

Category
Computer Science:
Computation and Language