Score: 1

zip2zip: Inference-Time Adaptive Vocabularies for Language Models via Token Compression

Published: June 1, 2025 | arXiv ID: 2506.01084v1

By: Saibo Geng, Nathan Ranchin, Yunzhen Yao, and more

Potential Business Impact:

Makes large language model inference faster and cheaper by shortening token sequences.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Tokenization efficiency plays a critical role in the performance and cost of large language models (LLMs), yet most models rely on static tokenizers optimized for general-purpose corpora. These tokenizers' fixed vocabularies often fail to adapt to domain- or language-specific inputs, leading to longer token sequences and higher computational costs. We introduce zip2zip, a framework that enables LLMs to dynamically adjust their token vocabulary at inference time, allowing for fewer generated tokens and thus faster inference. zip2zip consists of three key components: (1) a tokenizer based on Lempel-Ziv-Welch (LZW) compression that incrementally compresses tokens into reusable "hypertokens" on the fly; (2) an embedding layer that computes embeddings for newly formed hypertokens at runtime; and (3) a causal language modeling variant that trains the model to operate on hypertokenized, compressed sequences. We show that an existing LLM can be zip2zip-fied in 10 GPU-hours via parameter-efficient finetuning. The resulting zip2zip LLMs effectively learn to use hypertokens at inference time, reducing input and output sequence length by 20-60%, with significant improvements in inference latency.
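
To make component (1) concrete, the snippet below is a minimal, illustrative sketch of LZW compression applied to token ids, not the authors' actual implementation; the function name lzw_compress and the base_vocab_size parameter are assumptions made for the example. The idea is that recurring token n-grams are assigned fresh "hypertoken" ids above the base vocabulary, so repeated spans in the input collapse into single tokens as the sequence is processed.

```python
def lzw_compress(token_ids, base_vocab_size):
    """Illustrative LZW over token ids: recurring n-grams of base tokens are
    assigned new hypertoken ids >= base_vocab_size, shortening the sequence
    incrementally as it is read (a sketch, not the zip2zip implementation)."""
    table = {}                 # maps a tuple of base-token ids -> hypertoken id
    next_code = base_vocab_size
    output = []
    current = ()               # longest prefix seen so far that is in the table
    for tok in token_ids:
        candidate = current + (tok,)
        if len(candidate) == 1 or candidate in table:
            current = candidate                     # extend the current match
        else:
            # emit the longest known match, then register the new n-gram
            output.append(current[0] if len(current) == 1 else table[current])
            table[candidate] = next_code            # allocate a new hypertoken
            next_code += 1
            current = (tok,)
    if current:
        output.append(current[0] if len(current) == 1 else table[current])
    return output, table
```

For a repetitive input such as [5, 6, 5, 6, 5, 6] with base_vocab_size=100, this yields [5, 6, 100, 100]: the pair (5, 6) becomes hypertoken 100 and the sequence shrinks from six tokens to four. In zip2zip, embeddings for such dynamically created hypertokens are then computed at runtime from their constituent tokens (component 2).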

Country of Origin
🇨🇭 🇺🇸 🇫🇷 Switzerland, United States, France

Page Count
27 pages

Category
Computer Science:
Computation and Language