Broken Words, Broken Performance: Effect of Tokenization on Performance of LLMs
By: Sachin Pawar, Manoj Apte, Kshitij Jadhav, and more
Potential Business Impact:
Shows how badly split words make AI language models perform worse.
Tokenization is the first step in training any Large Language Model (LLM), where the text is split into a sequence of tokens as per the model's fixed vocabulary. This tokenization in LLMs is different from the traditional tokenization in NLP, where the text is split into a sequence of "natural" words. In LLMs, a natural word may also be broken into multiple tokens due to the limited vocabulary size of the LLM (e.g., Mistral's tokenizer splits "martial" into "mart" and "ial"). In this paper, we hypothesize that such breaking of natural words negatively impacts LLM performance on various NLP tasks. To quantify this effect, we propose a set of penalty functions that compute a tokenization penalty for a given text for a specific LLM, indicating how "bad" the tokenization is. We establish the statistical significance of our hypothesis on multiple NLP tasks for a set of different LLMs.
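To make the idea concrete, here is a minimal sketch in Python of what such a penalty function might look like. The toy vocabulary, the greedy longest-match segmenter, and the word-broken-fraction penalty below are illustrative assumptions, not the paper's actual tokenizers or penalty definitions; real LLM tokenizers use BPE-style merges over vocabularies of tens of thousands of subwords.

    import re

    # Toy subword vocabulary standing in for an LLM's fixed vocabulary.
    # Real vocabularies hold tens of thousands of entries; this miniature
    # is hypothetical and chosen so that "martial" gets broken, as in the
    # Mistral example above.
    VOCAB = {"the", "law", "is", "mart", "ial"}

    def greedy_tokenize(word):
        """Greedy longest-match segmentation, a simplified stand-in for
        BPE-style subword tokenization."""
        tokens, i = [], 0
        while i < len(word):
            for j in range(len(word), i, -1):  # try the longest piece first
                if word[i:j] in VOCAB:
                    tokens.append(word[i:j])
                    i = j
                    break
            else:
                tokens.append(word[i])  # no vocabulary match: emit the character
                i += 1
        return tokens

    def tokenization_penalty(text):
        """Hypothetical penalty: the fraction of natural words that the
        tokenizer breaks into more than one token (0.0 = nothing broken)."""
        words = re.findall(r"[a-z]+", text.lower())
        broken = sum(1 for w in words if len(greedy_tokenize(w)) > 1)
        return broken / len(words) if words else 0.0

    print(greedy_tokenize("martial"))                  # ['mart', 'ial']
    print(tokenization_penalty("the law is martial"))  # 0.25 (1 of 4 words broken)

The paper's actual penalty functions are presumably more refined than this word-broken fraction, but the shape is the same: score how badly a given LLM's tokenizer fragments a text, then test whether higher penalties correlate with lower task performance.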
Similar Papers
The Art of Breaking Words: Rethinking Multilingual Tokenizer Design
Computation and Language
Makes computers understand many languages faster.
Tokenization is Sensitive to Language Variation
Computation and Language
Helps computers understand different English writing styles.
TokenBreak: Bypassing Text Classification Models Through Token Manipulation
Machine Learning (CS)
Bypasses text filters, tricking computers into accepting bad input.