Sampling from Your Language Model One Byte at a Time
By: Jonathan Hayase, Alisa Liu, Noah A. Smith, and others
Potential Business Impact:
Fixes tokenization quirks so language models generate better text and code.
Tokenization is used almost universally by modern language models, enabling efficient text representation using multi-byte or multi-character tokens. However, prior work has shown that tokenization can distort a model's generations, an issue known as the Prompt Boundary Problem (PBP). For example, users are often advised not to end their prompts with a space, because doing so prevents the model from including the space as part of the next token. While this heuristic is effective in English, the underlying PBP continues to affect languages such as Chinese, as well as code generation, where tokens often do not line up with word and syntactic boundaries. In this work, we present an inference-time method that converts any autoregressive LM with a BPE tokenizer into a character-level or byte-level LM. Our method efficiently solves the PBP and can also unify the vocabularies of language models with different tokenizers, allowing one to ensemble LMs with different tokenizers at inference time or transfer post-training from one model to another via proxy-tuning. We demonstrate in experiments that the ensemble and proxy-tuned models outperform their constituents on downstream evals. Code is available at https://github.com/SewoongLab/byte-sampler.
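To make the PBP concrete, here is a minimal sketch using GPT-2's BPE tokenizer from Hugging Face `transformers` as a stand-in for any BPE tokenizer (an illustrative choice, not the paper's code):

```python
# Minimal illustration of the Prompt Boundary Problem (PBP) using GPT-2's
# BPE tokenizer (illustrative, not the paper's code). 'Ġ' marks a leading
# space carried inside a token.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# The likely continuation " Paris" is normally a single token that
# carries its own leading space.
print(tok.tokenize("The capital of France is Paris"))
# expected: ['The', 'Ġcapital', 'Ġof', 'ĠFrance', 'Ġis', 'ĠParis']

# Ending the prompt with a space strands that space as its own token, so
# the model can no longer emit 'ĠParis' and is pushed toward the rarer
# spaceless token 'Paris', distorting its predictions.
print(tok.tokenize("The capital of France is "))
# expected: ['The', 'Ġcapital', 'Ġof', 'ĠFrance', 'Ġis', 'Ġ']
```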
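The conversion idea itself can be sketched naively: a token-level LM induces a next-byte distribution if one sums the probability of every token whose byte expansion begins with a given byte. The brute-force version below conveys only this marginalization idea; the paper's method is efficient and also accounts for the model emitting only canonical BPE tokenizations. `next_byte_distribution` and the GPT-2 choice are illustrative assumptions, not the paper's API:

```python
# Naive sketch: induce a next-byte distribution from a token-level LM by
# summing probability over all tokens whose byte expansion starts with a
# given byte. Brute force and statistically crude; the paper's method
# handles the boundary effects exactly and efficiently.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# The slow GPT2Tokenizer exposes `byte_decoder`, mapping each vocabulary
# character back to its underlying byte.
tok = AutoTokenizer.from_pretrained("gpt2", use_fast=False)
model = AutoModelForCausalLM.from_pretrained("gpt2")

def next_byte_distribution(prompt: str) -> dict:
    """P(next byte | prompt), marginalized over the token vocabulary."""
    ids = tok(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        probs = torch.softmax(model(ids).logits[0, -1], dim=-1)
    byte_probs = {}
    for token_id, p in enumerate(probs.tolist()):
        if token_id in tok.all_special_ids:
            continue  # special tokens have no byte expansion
        token_str = tok.convert_ids_to_tokens(token_id)
        first_byte = tok.byte_decoder[token_str[0]]
        byte_probs[first_byte] = byte_probs.get(first_byte, 0.0) + p
    return byte_probs

dist = next_byte_distribution("The capital of France is")
top = max(dist, key=dist.get)
print(bytes([top]), dist[top])  # most likely next byte, e.g. b' '
```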
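Once two models share a byte-level vocabulary, their next-byte distributions become directly comparable, which is what enables inference-time ensembling and proxy-tuning across tokenizers. Below is a toy sketch of the standard proxy-tuning arithmetic (Liu et al.), with random logits standing in for real models; only the arithmetic is the point:

```python
# Toy sketch of proxy-tuning over a unified 256-byte vocabulary: steer a
# large base model by the logit offset between a small post-trained
# "expert" and its untuned counterpart. Random logits stand in for real
# models here.
import torch

def proxy_tuned_logits(base: torch.Tensor,
                       expert: torch.Tensor,
                       anti_expert: torch.Tensor) -> torch.Tensor:
    # All arguments are next-byte logits over the same 256-symbol space.
    return base + (expert - anti_expert)

base, expert, anti_expert = (torch.randn(256) for _ in range(3))
probs = torch.softmax(proxy_tuned_logits(base, expert, anti_expert), dim=-1)
print(probs.shape, float(probs.sum()))  # torch.Size([256]) 1.0
```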
Similar Papers
From Bytes to Ideas: Language Modeling with Autoregressive U-Nets
Computation and Language
Models language directly from raw bytes at multiple levels of granularity.
Parity-Aware Byte-Pair Encoding: Improving Cross-lingual Fairness in Tokenization
Computation and Language
Makes tokenization fairer across languages.
Bit-level BPE: Below the byte boundary
Computation and Language
Shrinks token sequences by compressing text below the byte boundary.