Score: 3

Beyond Text Compression: Evaluating Tokenizers Across Scales

Published: June 3, 2025 | arXiv ID: 2506.03101v1

By: Jonas F. Lotz, António V. Lopes, Stephan Peitz, and more

BigTech Affiliations: Apple

Potential Business Impact:

Identifies the best tokenizers for language models at a fraction of the usual compute cost.

Business Areas:
Text Analytics, Data and Analytics, Software

The choice of tokenizer can profoundly impact language model performance, yet accessible and reliable evaluations of tokenizer quality remain an open challenge. Inspired by scaling consistency, we show that smaller models can accurately predict significant differences in tokenizer impact on larger models at a fraction of the compute cost. By systematically evaluating both English-centric and multilingual tokenizers, we find that tokenizer choice has negligible effects on tasks in English but results in consistent performance differences in multilingual settings. We propose new intrinsic tokenizer metrics inspired by Zipf's law that correlate more strongly with downstream performance than text compression when modeling unseen languages. By combining several metrics to capture multiple aspects of tokenizer behavior, we develop a reliable framework for intrinsic tokenizer evaluations. Our work offers a more efficient path to informed tokenizer selection in future language model development.
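The abstract contrasts text compression with Zipf-inspired intrinsic metrics but does not spell out their definitions here. As a minimal sketch of the general idea, and only under that assumption, the snippet below computes two illustrative statistics for an arbitrary tokenizer over a corpus: bytes per token (a standard compression measure) and the slope of a log-log rank-frequency fit (one plausible way to operationalise a "Zipf-inspired" metric). The whitespace tokenizer is a hypothetical stand-in, not the paper's setup.

```python
# Hedged sketch: intrinsic tokenizer metrics of the kind the abstract describes.
# The paper's exact metric definitions are not reproduced here; the Zipf-based
# statistic below (log-log rank-frequency slope) is an illustrative assumption,
# as is the whitespace "tokenizer" used as a stand-in for a trained one.
from collections import Counter
import math


def whitespace_tokenize(text: str) -> list[str]:
    """Stand-in tokenizer; a real evaluation would plug in e.g. a trained BPE model."""
    return text.split()


def bytes_per_token(texts: list[str], tokenize) -> float:
    """Compression metric: UTF-8 bytes of the corpus divided by token count."""
    total_bytes = sum(len(t.encode("utf-8")) for t in texts)
    total_tokens = sum(len(tokenize(t)) for t in texts)
    return total_bytes / max(total_tokens, 1)


def zipf_slope(texts: list[str], tokenize) -> float:
    """Least-squares slope of log-frequency vs. log-rank over the token counts.

    A distribution close to Zipf's law has a slope near -1; large deviations
    can signal an unbalanced vocabulary (assumption: one reasonable reading of
    a "Zipf-inspired" intrinsic metric).
    """
    counts = Counter(tok for t in texts for tok in tokenize(t))
    freqs = sorted(counts.values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var


if __name__ == "__main__":
    corpus = ["the cat sat on the mat", "the dog chased the cat"]
    print("bytes/token:", round(bytes_per_token(corpus, whitespace_tokenize), 3))
    print("zipf slope:", round(zipf_slope(corpus, whitespace_tokenize), 3))
```

In the paper's framing, several such intrinsic metrics would be combined and checked for correlation with downstream performance, especially on languages unseen during tokenizer training.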

Country of Origin
🇩🇰 🇺🇸 Denmark, United States

Repos / Data Links

Page Count
19 pages

Category
Computer Science:
Computation and Language