TokAlign: Efficient Vocabulary Adaptation via Token Alignment
By: Chong Li, Jiajun Zhang, Chengqing Zong
Potential Business Impact:
Lets existing AI models adapt to new languages and domains faster.
Tokenization serves as a foundational step for Large Language Models (LLMs) to process text. In new domains or languages, an inefficient tokenizer slows down both training and generation of the LLM. Vocabulary mismatch also hinders deep knowledge transfer between LLMs, such as token-level distillation. To bridge this gap, we propose an efficient method named TokAlign that replaces the vocabulary of an LLM based on token co-occurrences and further transfers token-level knowledge between models. It first aligns the source vocabulary to the target one by learning a one-to-one mapping matrix over token IDs. Model parameters, including embeddings, are then rearranged and progressively fine-tuned for the new vocabulary. Our method significantly improves multilingual text compression rates and vocabulary initialization for LLMs, decreasing the post-initialization perplexity from $3.4\times10^{2}$ for strong baseline methods to $1.2\times10^{2}$. Experimental results on models across multiple parameter scales demonstrate the effectiveness and generalization of TokAlign, which needs as few as 5k steps to restore the performance of the vanilla model. After unifying vocabularies between LLMs, token-level distillation remarkably boosts the base model (+4.4% over sentence-level distillation) while costing only 235M tokens.
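A minimal sketch of the alignment-and-initialization idea described above, assuming co-occurrence-based token vectors have already been trained for both vocabularies on the same corpus (e.g., GloVe-style); the function names and the greedy nearest-neighbor matching are illustrative simplifications, not the paper's exact algorithm.

```python
import numpy as np

def align_token_ids(src_vecs: np.ndarray, tgt_vecs: np.ndarray) -> np.ndarray:
    """Map each target token ID to its most similar source token ID.

    src_vecs: (V_src, d) co-occurrence-based vectors for the source vocabulary.
    tgt_vecs: (V_tgt, d) co-occurrence-based vectors for the target vocabulary,
              trained on the same corpus so the two spaces are comparable.
    """
    # Cosine similarity between every target token and every source token.
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = t @ s.T                      # (V_tgt, V_src)
    # Greedy nearest-neighbor match; a strict one-to-one alignment would
    # instead solve an assignment problem (e.g., the Hungarian algorithm).
    return sim.argmax(axis=1)          # (V_tgt,) target ID -> source ID

def init_new_embeddings(src_embed: np.ndarray, mapping: np.ndarray) -> np.ndarray:
    """Rearrange the source model's embedding rows for the new vocabulary.

    The returned (V_tgt, d_model) matrix serves as the initialization that
    is subsequently fine-tuned, per the progressive schedule in the paper.
    """
    return src_embed[mapping]
```

Once the embedding and unembedding matrices are rearranged this way, the two models share a vocabulary, which is what makes token-level (per-position) distillation between them possible.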
Similar Papers
Probabilistic Token Alignment for Large Language Model Fusion
Computation and Language
Combines AI brains to make them smarter.
Parallel Tokenizers: Rethinking Vocabulary Design for Cross-Lingual Transfer
Computation and Language
Helps computers understand many languages better.
Lost in Translation? Vocabulary Alignment for Source-Free Domain Adaptation in Open-Vocabulary Semantic Segmentation
Computer Vision and Pattern Recognition
Helps computers see and name objects better.