Parallel Tokenizers: Rethinking Vocabulary Design for Cross-Lingual Transfer
By: Muhammad Dehan Al Kautsar, Fajri Koto
Potential Business Impact:
Helps computers understand many languages better.
Tokenization defines the foundation of multilingual language models by determining how words are represented and shared across languages. However, existing methods often fail to support effective cross-lingual transfer because semantically equivalent words are assigned distinct embeddings. For example, "I eat rice" in English and "Ina cin shinkafa" in Hausa are typically mapped to different vocabulary indices, preventing shared representations and limiting cross-lingual generalization. We introduce parallel tokenizers, a new framework that trains tokenizers monolingually and then aligns their vocabularies exhaustively using bilingual dictionaries or word-to-word translation, ensuring consistent indices for semantically equivalent words. This alignment enforces a shared semantic space across languages while naturally improving fertility balance. To assess their effectiveness, we pretrain a transformer encoder from scratch on thirteen low-resource languages and evaluate it on sentiment analysis, hate speech detection, emotion classification, and sentence embedding similarity. Across all tasks, models trained with parallel tokenizers outperform conventional multilingual baselines, confirming that rethinking tokenization is essential for advancing multilingual representation learning, especially in low-resource settings.
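To make the alignment idea concrete, the sketch below shows how two monolingual vocabularies could be remapped so that dictionary-equivalent words share the same vocabulary index (and therefore the same embedding row). This is a simplified, word-level illustration under stated assumptions, not the authors' implementation: the function name `align_vocabularies`, the tiny English-Hausa dictionary, and the word-level view are all illustrative, whereas real parallel tokenizers operate over subword vocabularies.

```python
# Minimal sketch (illustrative, not the paper's code): align two monolingual
# vocabularies so dictionary-equivalent words receive the same index.

def align_vocabularies(en_vocab, ha_vocab, en_to_ha):
    """
    en_vocab, ha_vocab: lists of word-level tokens from monolingual tokenizers.
    en_to_ha: bilingual dictionary mapping English words to Hausa words.
    Returns per-language {token: shared_index} maps.
    """
    en_ids, ha_ids = {}, {}
    next_id = 0

    # 1. Assign every English token an index; copy that index to its
    #    Hausa translation, so the pair shares one embedding row.
    for en_word in en_vocab:
        en_ids[en_word] = next_id
        ha_word = en_to_ha.get(en_word)
        if ha_word is not None and ha_word in ha_vocab and ha_word not in ha_ids:
            ha_ids[ha_word] = next_id
        next_id += 1

    # 2. Hausa tokens without a dictionary match get fresh, language-specific indices.
    for ha_word in ha_vocab:
        if ha_word not in ha_ids:
            ha_ids[ha_word] = next_id
            next_id += 1

    return en_ids, ha_ids


if __name__ == "__main__":
    en_vocab = ["i", "eat", "rice"]
    ha_vocab = ["ina", "cin", "shinkafa"]
    en_to_ha = {"i": "ina", "eat": "cin", "rice": "shinkafa"}

    en_ids, ha_ids = align_vocabularies(en_vocab, ha_vocab, en_to_ha)
    print(en_ids)  # {'i': 0, 'eat': 1, 'rice': 2}
    print(ha_ids)  # {'ina': 0, 'cin': 1, 'shinkafa': 2}
```

With this kind of mapping, "I eat rice" and "Ina cin shinkafa" index the same embedding rows, which is the shared-semantic-space property the abstract describes; tokens with no dictionary counterpart simply keep language-specific indices.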
Similar Papers
Dictionaries to the Rescue: Cross-Lingual Vocabulary Transfer for Low-Resource Languages Using Bilingual Dictionaries
Computation and Language
Teaches computers new languages using dictionaries.
TokAlign: Efficient Vocabulary Adaptation via Token Alignment
Computation and Language
Helps computers learn new languages faster.
False Friends Are Not Foes: Investigating Vocabulary Overlap in Multilingual Language Models
Computation and Language
Shared words help computers learn many languages faster.