Score: 1

BlockBPE: Parallel BPE Tokenization

Published: July 16, 2025 | arXiv ID: 2507.11941v1

By: Amos You

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Speeds up the tokenization step of language model pipelines, making batch text processing on GPUs much faster.

Tokenization is a critical preprocessing step in large language model pipelines, yet widely used implementations remain CPU-bound and suboptimal for batch inference workflows on GPU. We present BlockBPE, a parallel GPU implementation of byte-pair encoding (BPE) that achieves near linear-time complexity under realistic assumptions and is optimized for high-throughput batch inference. Unlike existing Rust-based tokenizers such as HuggingFace Tokenizers or OpenAI's tiktoken, whose runtimes are dominated by Regex pre-tokenization and exhibit $O(n \log n)$ runtime, BlockBPE eliminates Regex pre-tokenization, which incurs a small loss in generation quality but enables highly parallelized token merges within thread blocks, reducing overall complexity to $O(nd)$ where $d \ll n$. On high-batch inference workloads, BlockBPE achieves up to 2x higher throughput than tiktoken and 2.5x higher than HuggingFace Tokenizers.
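To make the core idea concrete, here is a minimal CPU reference sketch of BPE without Regex pre-tokenization. This is purely illustrative and is not the paper's GPU kernel; the function name `block_bpe_encode` and the `merge_ranks` / `merged_id` tables are assumptions for the example. Each merge pass corresponds conceptually to one round of parallel merges within a thread block, and the number of passes $d$ is what gives the $O(nd)$ behavior described in the abstract.

```python
def block_bpe_encode(data: bytes,
                     merge_ranks: dict[tuple[int, int], int],
                     merged_id: dict[tuple[int, int], int]) -> list[int]:
    """Encode raw bytes to token IDs by repeatedly applying the
    lowest-rank (highest-priority) BPE merge until none applies.

    Illustrative sketch only: a GPU version would perform each merge
    pass in parallel across a thread block instead of this serial loop.
    """
    tokens = list(data)  # initial tokens are the raw byte values (no regex split)
    while True:
        # Find the best-ranked adjacent pair currently in the sequence.
        best_pair, best_rank = None, None
        for pair in zip(tokens, tokens[1:]):
            rank = merge_ranks.get(pair)
            if rank is not None and (best_rank is None or rank < best_rank):
                best_pair, best_rank = pair, rank
        if best_pair is None:
            return tokens  # no more merges possible
        # Merge every non-overlapping occurrence of best_pair in one pass.
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best_pair:
                out.append(merged_id[best_pair])
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
```

Because the whole input is treated as one byte sequence rather than being pre-split by a regex, merges can cross boundaries that a pre-tokenizer would have enforced, which is the source of the small generation-quality loss noted above.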

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
5 pages

Category
Computer Science:
Computation and Language