Sampling from Your Language Model One Byte at a Time

Published: June 17, 2025 | arXiv ID: 2506.14123v2

By: Jonathan Hayase, Alisa Liu, Noah A. Smith, and more

BigTech Affiliations: University of Washington

Potential Business Impact:

Removes tokenization-induced errors from language model outputs and lets models with different tokenizers be combined, improving generation quality.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Tokenization is used almost universally by modern language models, enabling efficient text representation using multi-byte or multi-character tokens. However, prior work has shown that tokenization can introduce distortion into the model's generations, an issue known as the Prompt Boundary Problem (PBP). For example, users are often advised not to end their prompts with a space, because the trailing space prevents the model from including it as part of the next token. While this heuristic is effective in English, the underlying PBP continues to affect languages such as Chinese, as well as code generation, where tokens often do not line up with word and syntactic boundaries. In this work, we present an inference-time method to convert any autoregressive LM with a BPE tokenizer into a character-level or byte-level LM. Our method efficiently solves the PBP and also unifies the vocabularies of language models with different tokenizers, allowing one to ensemble such models at inference time or transfer post-training from one model to another via proxy-tuning. Our experiments show that the ensembled and proxy-tuned models outperform their constituents on downstream evaluations. Code is available at https://github.com/SewoongLab/byte-sampler.
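To make the core idea concrete, here is a minimal, hypothetical Python sketch of the token-to-byte conversion: given a next-token distribution from a BPE-tokenized LM, it marginalizes over all tokens whose byte encodings extend the bytes emitted so far, yielding a next-byte distribution. The names (next_byte_distribution, token_probs, vocab_bytes, byte_prefix) are illustrative assumptions, not the paper's API, and this naive version omits the paper's efficient handling of all tokenizations consistent with the prompt boundary; see the linked repository for the actual method.

```python
from collections import defaultdict

def next_byte_distribution(token_probs, vocab_bytes, byte_prefix=b""):
    """Naive sketch: turn a next-token distribution into a next-byte one.

    token_probs : dict[int, float] -- P(token | context) from the LM
    vocab_bytes : dict[int, bytes] -- token id -> its UTF-8 byte string
    byte_prefix : bytes            -- bytes already emitted from the
                                      current, not-yet-complete token
    """
    byte_probs = defaultdict(float)
    total = 0.0
    for tok_id, p in token_probs.items():
        tok = vocab_bytes[tok_id]
        # Only tokens that strictly extend the current byte prefix
        # contribute probability mass to the next byte.
        if len(tok) > len(byte_prefix) and tok.startswith(byte_prefix):
            byte_probs[tok[len(byte_prefix)]] += p
            total += p
    # Renormalize over the surviving tokens.
    return {b: p / total for b, p in byte_probs.items()} if total > 0 else {}

# Toy example: vocabulary {0: b"the", 1: b"th", 2: b"a"}.
# next_byte_distribution({0: 0.5, 1: 0.3, 2: 0.2},
#                        {0: b"the", 1: b"th", 2: b"a"}, b"th")
# -> {ord("e"): 1.0}, since only b"the" extends the prefix b"th".
```

Sampling bytes this way never commits to a token boundary prematurely, which is exactly what sidesteps the trailing-space pitfall described above.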

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/SewoongLab/byte-sampler

Page Count
23 pages

Category
Computer Science:
Computation and Language