How Long Is a Piece of String? A Brief Empirical Analysis of Tokenizers
By: Jonathan Roberts, Kai Han, Samuel Albanie
Potential Business Impact:
Clarifies how token counts vary across models and text domains, enabling more accurate inference-cost estimates and fairer model comparisons.
Frontier LLMs are increasingly utilised across academia, society and industry. A commonly used unit for comparing models, their inputs and outputs, and estimating inference pricing is the token. In general, tokens are treated as a stable currency, assumed to be broadly consistent across tokenizers and contexts, enabling direct comparisons. However, tokenization varies significantly across models and domains of text, making naive interpretation of token counts problematic. We quantify this variation by providing a comprehensive empirical analysis of tokenization, exploring the compression of sequences to tokens across different distributions of textual data. Our analysis challenges commonly held heuristics about token lengths, finding them to be overly simplistic. We hope the insights from our study add clarity and intuition about tokenization in contemporary LLMs.
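As a rough illustration of the kind of variation the abstract describes, the sketch below (not taken from the paper; the tokenizer names, sample texts and characters-per-token measure are illustrative assumptions) counts how many tokens different Hugging Face tokenizers produce for the same strings drawn from different text domains.

```python
# Illustrative sketch (not from the paper): compare token counts for the same
# text under different tokenizers. The models and samples are assumptions
# chosen only to show that tokens-per-string varies by tokenizer and domain.
from transformers import AutoTokenizer

samples = {
    "english_prose": "Tokenization varies significantly across models and domains of text.",
    "python_code": "def squares(n):\n    return [i ** 2 for i in range(n)]",
    "german_compound": "Donaudampfschifffahrtsgesellschaft",
}

tokenizer_names = ["gpt2", "bert-base-uncased", "google/flan-t5-base"]

for name in tokenizer_names:
    tok = AutoTokenizer.from_pretrained(name)
    for label, text in samples.items():
        n_tokens = len(tok.encode(text, add_special_tokens=False))
        # Characters per token is a crude compression measure: higher values
        # mean the tokenizer covers the same text with fewer tokens.
        print(f"{name:>22} | {label:<15} | tokens={n_tokens:3d} "
              f"| chars/token={len(text) / n_tokens:.2f}")
```

Running a comparison like this typically shows code and non-English text splitting into noticeably more tokens per character than English prose, which is why a single "tokens per word" heuristic can mislead when estimating context usage or pricing.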
Similar Papers
An Information-Theoretic Perspective on LLM Tokenizers
Information Theory
Analyzes LLM tokenizers through an information-theoretic lens.
Broken Words, Broken Performance: Effect of Tokenization on Performance of LLMs
Computation and Language
Examines how tokenization choices affect LLM performance.
The Art of Breaking Words: Rethinking Multilingual Tokenizer Design
Computation and Language
Rethinks tokenizer design for multilingual text.