FLEXITOKENS: Flexible Tokenization for Evolving Language Models
By: Abraham Toluase Owodunni, Orevaoghene Ahia, Sachin Kumar
Potential Business Impact:
Makes computers understand new words better.
Language models (LMs) are difficult to adapt to new data distributions through simple finetuning. This is due to the rigidity of their subword tokenizers, which typically remain unchanged during adaptation. This inflexibility often leads to inefficient tokenization, causing over-fragmentation of out-of-distribution domains, unseen languages, or scripts. In this work, we develop byte-level LMs with learnable tokenizers to make tokenization adaptive. Our models include a submodule that learns to predict boundaries within the input byte sequence, encoding it into variable-length segments. Existing tokenizer-free methods train this boundary predictor using an auxiliary loss that enforces a fixed compression rate across the training corpus, introducing a new kind of rigidity. We propose FLEXITOKENS, a simplified training objective that enables significantly greater flexibility during adaptation. Evaluating across multiple multilingual benchmarks, morphologically diverse tasks, and domains, we demonstrate that FLEXITOKENS consistently reduces token over-fragmentation and achieves improvements of up to 10% in downstream task performance compared to subword and other gradient-based tokenizers. Code and data for our experiments will be released at https://github.com/owos/flexitokens.
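The abstract's contrast between a fixed-compression auxiliary loss and a more flexible one can be made concrete with a toy sketch. Everything below is an illustrative assumption rather than the paper's actual implementation: the `BoundaryPredictor` module, the `fixed_rate_loss` and `flexible_rate_loss` names, and the one-sided hinge form of the flexible penalty are all stand-ins chosen to show the general idea of a learnable byte-boundary predictor whose compression rate is constrained rather than pinned.

```python
# Toy sketch (not the authors' code): a byte-level boundary predictor whose
# auxiliary loss only discourages over-fragmentation, instead of forcing the
# compression rate to match a fixed target. All names and the exact loss
# forms are illustrative assumptions.
import torch
import torch.nn as nn

class BoundaryPredictor(nn.Module):
    def __init__(self, d_model: int = 256, n_bytes: int = 256):
        super().__init__()
        self.embed = nn.Embedding(n_bytes, d_model)       # one embedding per byte value
        self.scorer = nn.Linear(d_model, 1)               # per-position boundary logit

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        # byte_ids: (batch, seq_len) integers in [0, 255]
        h = self.embed(byte_ids)
        # Probability that a segment boundary falls after each byte.
        return torch.sigmoid(self.scorer(h)).squeeze(-1)  # (batch, seq_len)

def fixed_rate_loss(p: torch.Tensor, target_rate: float) -> torch.Tensor:
    # Prior tokenizer-free approaches: push the expected boundary frequency
    # toward one fixed compression rate -- rigid when the data shifts.
    return (p.mean() - target_rate).pow(2)

def flexible_rate_loss(p: torch.Tensor, max_rate: float) -> torch.Tensor:
    # One-sided alternative: penalize only predicting *too many* boundaries
    # (over-fragmentation); anything below max_rate incurs no penalty, so
    # the learned segmentation rate is free to drift during adaptation.
    return torch.relu(p.mean() - max_rate)

if __name__ == "__main__":
    predictor = BoundaryPredictor()
    byte_ids = torch.randint(0, 256, (2, 128))            # fake batch of UTF-8 bytes
    probs = predictor(byte_ids)
    print(fixed_rate_loss(probs, 0.25).item(),
          flexible_rate_loss(probs, 0.25).item())
```

The design point the sketch tries to capture: a squared-error penalty is minimized only at exactly the target rate, while the hinge is flat below the cap, which is one simple way a boundary predictor could segment new scripts or domains more coarsely or finely without fighting its auxiliary loss.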
Similar Papers
Achieving Tokenizer Flexibility in Language Models through Heuristic Adaptation and Supertoken Learning
Computation and Language
Lets computers understand more words faster.
TokenFLEX: Unified VLM Training for Flexible Visual Tokens Inference
CV and Pattern Recognition
Lets computers understand pictures better, faster.
One Tokenizer To Rule Them All: Emergent Language Plasticity via Multilingual Tokenizers
Computation and Language
Helps computers learn many new languages faster.