zip2zip: Inference-Time Adaptive Vocabularies for Language Models via Token Compression
By: Saibo Geng, Nathan Ranchin, Yunzhen Yao, and more
Potential Business Impact:
Makes language models faster and cheaper to run.
Tokenization efficiency plays a critical role in the performance and cost of large language models (LLMs), yet most models rely on static tokenizers optimized for general-purpose corpora. These tokenizers' fixed vocabularies often fail to adapt to domain- or language-specific inputs, leading to longer token sequences and higher computational costs. We introduce zip2zip, a framework that enables LLMs to dynamically adjust their token vocabulary at inference time, allowing for fewer generated tokens and thus faster inference. zip2zip consists of three key components: (1) a tokenizer based on Lempel-Ziv-Welch (LZW) compression that incrementally compresses tokens into reusable "hypertokens" on the fly; (2) an embedding layer that computes embeddings for newly formed hypertokens at runtime; and (3) a causal language modeling variant that trains the model to operate on hypertokenized, compressed sequences. We show that an existing LLM can be zip2zip-fied in 10 GPU-hours via parameter-efficient finetuning. The resulting zip2zip LLMs effectively learn to use hypertokens at inference time, reducing input and output sequence length by 20-60%, with significant improvements in inference latency.
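To make the LZW-based hypertokenization concrete, below is a minimal, hypothetical sketch of how an LZW-style dictionary can merge recurring runs of token IDs into new "hypertoken" IDs on the fly. The function name, the dictionary layout, and the `base_vocab_size` parameter are illustrative assumptions for this sketch, not the paper's actual API; zip2zip's tokenizer and its runtime hypertoken embeddings may be implemented differently.

```python
# Hypothetical sketch: LZW-style "hypertokenization" over token IDs.
# Names (hypertokenize, base_vocab_size) are illustrative, not zip2zip's API.

def hypertokenize(token_ids, base_vocab_size):
    """Greedily merge recurring runs of token IDs into new hypertoken IDs
    using an LZW-style dictionary that is built incrementally, on the fly."""
    dictionary = {}          # maps a tuple of base token IDs -> hypertoken ID
    next_id = base_vocab_size
    output = []
    current = ()             # longest prefix currently matched in the dictionary

    for tok in token_ids:
        candidate = current + (tok,)
        if len(candidate) == 1 or candidate in dictionary:
            # The extended run is still known; keep extending the match.
            current = candidate
        else:
            # Emit the longest known match (a base token or a hypertoken) ...
            output.append(current[0] if len(current) == 1 else dictionary[current])
            # ... register the longer run as a fresh hypertoken ...
            dictionary[candidate] = next_id
            next_id += 1
            # ... and restart matching from the current token.
            current = (tok,)

    if current:
        output.append(current[0] if len(current) == 1 else dictionary[current])
    return output, dictionary


if __name__ == "__main__":
    # A repetitive token-ID sequence compresses: repeated runs become
    # single hypertoken IDs after their first occurrence.
    ids = [5, 7, 5, 7, 5, 7, 9]
    compressed, table = hypertokenize(ids, base_vocab_size=50000)
    print(compressed)   # [5, 7, 50000, 50000, 9] -- shorter than the 7-token input
    print(table)        # e.g. {(5, 7): 50000, (7, 5): 50001, ...}
```

Because the dictionary is reconstructed deterministically from the token stream itself, the model side only needs a way to produce embeddings for hypertoken IDs at runtime, which is the role of the second component described in the abstract.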
Similar Papers
OmniZip: Audio-Guided Dynamic Token Compression for Fast Omnimodal Large Language Models
CV and Pattern Recognition
Makes AI understand videos and sounds faster.
Beyond Text Compression: Evaluating Tokenizers Across Scales
Computation and Language
Finds best word-choosers for language AI.
Lossless Token Sequence Compression via Meta-Tokens
Computation and Language
Makes AI understand more with less text.