How Long Is a Piece of String? A Brief Empirical Analysis of Tokenizers

Published: January 16, 2026 | arXiv ID: 2601.11518v1

By: Jonathan Roberts, Kai Han, Samuel Albanie

Potential Business Impact:

Clarifies how token counts, the unit commonly used to price and compare LLMs, vary across models and text domains, enabling more accurate cost estimation and model comparison.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Frontier LLMs are increasingly utilised across academia, society and industry. A commonly used unit for comparing models, their inputs and outputs, and estimating inference pricing is the token. In general, tokens are used as a stable currency, assumed to be broadly consistent across tokenizers and contexts, enabling direct comparisons. However, tokenization varies significantly across models and domains of text, making naive interpretation of token counts problematic. We quantify this variation by providing a comprehensive empirical analysis of tokenization, exploring the compression of sequences to tokens across different distributions of textual data. Our analysis challenges commonly held heuristics about token lengths, finding them to be overly simplistic. We hope the insights of our study add clarity and intuition toward tokenization in contemporary LLMs.
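The abstract's central point, that the same text yields different token counts under different tokenizers, can be illustrated with a toy sketch. The simple schemes below (whitespace, character, and byte splitting) are illustrative stand-ins for the real subword tokenizers the paper studies; none of this code comes from the paper itself:

```python
# Illustrative sketch: the same string produces different token counts
# under different tokenization schemes, so "tokens" are not a stable
# unit of comparison across models.

def whitespace_tokens(text):
    # Split on whitespace: roughly one token per word.
    return text.split()

def char_tokens(text):
    # Character-level tokenization: one token per character.
    return list(text)

def byte_tokens(text):
    # Byte-level tokenization: one token per UTF-8 byte.
    return list(text.encode("utf-8"))

text = "Frontier LLMs are increasingly utilised across academia."
counts = {
    "whitespace": len(whitespace_tokens(text)),
    "character": len(char_tokens(text)),
    "byte": len(byte_tokens(text)),
}
for scheme, n in counts.items():
    print(f"{scheme}: {n} tokens")
```

Real subword tokenizers (e.g. BPE variants) fall between the word and character extremes, and, as the paper quantifies, their compression ratios shift with the text domain, so a heuristic like "one token is about four characters" does not hold uniformly.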

Page Count
14 pages

Category
Computer Science:
Computation and Language