SupraTok: Cross-Boundary Tokenization for Enhanced Language Model Performance
By: Andrei-Valentin Tănase, Elena Pelican
Tokenization remains a fundamental yet underexplored bottleneck in natural language processing: tokenization strategies have stayed largely static despite remarkable progress in model architectures. We present SupraTok, a novel tokenization architecture that reimagines subword segmentation through three innovations: cross-boundary pattern learning that discovers multi-word semantic units, entropy-driven data curation that optimizes training corpus quality, and multi-phase curriculum learning for stable convergence. Our approach extends Byte-Pair Encoding by learning "superword" tokens: coherent multi-word expressions that preserve semantic unity while maximizing compression efficiency. SupraTok achieves a 31% improvement in English tokenization efficiency (5.91 versus 4.51 characters per token) over OpenAI's o200k tokenizer and a 30% improvement over Google's Gemma 3 tokenizer (256k vocabulary), while maintaining competitive performance across 38 languages. When integrated with a GPT-2 scale model (124M parameters) trained on 10 billion tokens from the FineWeb-Edu dataset, SupraTok yields an 8.4% improvement on HellaSWAG and a 9.5% improvement on MMLU without architectural modifications. While these results are promising at this scale, further validation at larger model scales is needed. These findings suggest that efficient tokenization can complement architectural innovations as a path to improved language model performance.
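The core mechanism described in the abstract, BPE merges that are allowed to cross word boundaries, can be illustrated with a minimal sketch. The sketch below is not the paper's implementation: the function names, toy corpus, and merge budget are illustrative assumptions, and SupraTok's entropy-driven curation and curriculum phases are omitted. The only deliberate departure from classic BPE is that the corpus is not pre-split on whitespace, so spaces remain ordinary symbols and frequent merges can grow into multi-word "superword" tokens. The final lines compute the characters-per-token efficiency metric quoted in the abstract.

from collections import Counter

def get_pair_counts(corpus):
    """Count adjacent symbol pairs across the tokenized corpus."""
    counts = Counter()
    for tokens in corpus:
        for a, b in zip(tokens, tokens[1:]):
            counts[(a, b)] += 1
    return counts

def merge_pair(corpus, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = pair[0] + pair[1]
    new_corpus = []
    for tokens in corpus:
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                out.append(merged)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        new_corpus.append(out)
    return new_corpus

def train_cross_boundary_bpe(texts, num_merges):
    # Unlike classic BPE, which pre-splits text on whitespace, we keep
    # spaces as symbols, so repeated merges can cross word boundaries
    # and produce multi-word tokens such as "the cat" or "on the".
    corpus = [list(text) for text in texts]
    merges = []
    for _ in range(num_merges):
        counts = get_pair_counts(corpus)
        if not counts:
            break
        pair = counts.most_common(1)[0][0]  # greedily take the most frequent pair
        merges.append(pair)
        corpus = merge_pair(corpus, pair)
    return merges, corpus

# Toy corpus (hypothetical); real training would use a curated web-scale corpus.
texts = ["the cat sat on the mat", "the dog sat on the rug"] * 50
merges, corpus = train_cross_boundary_bpe(texts, num_merges=40)

# Characters per token: the efficiency metric used in the abstract
# (higher means fewer tokens are needed for the same text).
total_chars = sum(len(t) for t in texts)
total_tokens = sum(len(toks) for toks in corpus)
print(f"chars/token: {total_chars / total_tokens:.2f}")

Running this on the toy corpus shows merges first forming subwords ("th", "the") and then absorbing the following space and word, which is the cross-boundary behavior that drives the compression gains reported above.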