
Training Language Models with Homotokens Leads to Delayed Overfitting

Published: January 6, 2026 | arXiv ID: 2601.02867v1

By: Adrian Cosma, Stefan Ruseti, Emilian Radoi, and more

Potential Business Impact:

Makes language models more robust to different ways of splitting the same word into tokens, improving how well they generalize from limited training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Subword tokenization introduces a computational layer in language models where many distinct token sequences decode to the same surface form and preserve meaning, yet induce different internal computations. Despite this non-uniqueness, language models are typically trained using a single canonical longest-prefix tokenization. We formalize homotokens (alternative valid subword segmentations of the same lexical item) as a strictly meaning-preserving form of data augmentation. We introduce a lightweight training architecture that conditions canonical next-token prediction on sampled homotoken variants via an auxiliary causal encoder and block-causal cross-attention, without modifying the training objective or token interface. In data-constrained pretraining, homotoken augmentation consistently delays overfitting under repeated data exposure and improves generalization across diverse evaluation datasets. In multilingual fine-tuning, we find that the effectiveness of homotokens depends on tokenizer quality: gains are strongest when canonical tokens are highly compressed and diminish when the tokenizer already over-fragments the input. Overall, homotokens provide a simple and modular mechanism for inducing tokenization invariance in language models.
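The core observation is that a single surface form admits several valid subword segmentations that all decode back to the same string, while a standard tokenizer emits only one canonical (longest-prefix) segmentation. Below is a minimal sketch of that idea; the toy vocabulary, the greedy longest-prefix canonical tokenizer, and the uniform sampling of a non-canonical variant are assumptions made for illustration, not the paper's actual tokenizer or sampling scheme.

```python
# Illustrative sketch (not the authors' code): enumerate "homotokens",
# i.e. alternative valid subword segmentations of the same word, and
# sample a non-canonical variant to pair with the canonical one.
import random
from functools import lru_cache

# Toy subword vocabulary, assumed for the example.
VOCAB = {"un", "u", "n", "re", "related", "rel", "ated", "at", "ed", "unrelated"}

def canonical_tokenize(word):
    """Greedy longest-prefix tokenization: the single segmentation a
    standard tokenizer would emit."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest prefix first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"no subword covers position {i} of {word!r}")
    return tokens

def all_segmentations(word):
    """Every valid segmentation of `word` into vocabulary subwords.
    All of them decode back to the same surface form."""
    @lru_cache(maxsize=None)
    def seg(i):
        if i == len(word):
            return [[]]
        out = []
        for j in range(i + 1, len(word) + 1):
            if word[i:j] in VOCAB:
                out.extend([word[i:j]] + rest for rest in seg(j))
        return out
    return seg(0)

def sample_homotoken(word, rng=random):
    """Sample an alternative (non-canonical) segmentation; fall back to
    the canonical one if the word has only a single valid segmentation."""
    canonical = canonical_tokenize(word)
    variants = [s for s in all_segmentations(word) if s != canonical]
    return rng.choice(variants) if variants else canonical

if __name__ == "__main__":
    word = "unrelated"
    print("canonical:", canonical_tokenize(word))  # e.g. ['unrelated']
    print("homotoken:", sample_homotoken(word))    # e.g. ['un', 'rel', 'ated']
```

Per the abstract, the paper's architecture conditions canonical next-token prediction on such sampled variants through an auxiliary causal encoder and block-causal cross-attention; the sketch above only covers the segmentation-sampling side of that pipeline.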

Country of Origin
🇨🇭 Switzerland

Page Count
14 pages

Category
Computer Science:
Computation and Language