BlockBPE: Parallel BPE Tokenization
By: Amos You
Potential Business Impact:
Speeds up how text is turned into tokens for AI models.
Tokenization is a critical preprocessing step in large language model pipelines, yet widely used implementations remain CPU-bound and suboptimal for batch inference workflows on GPU. We present BlockBPE, a parallel GPU implementation of byte-pair encoding (BPE) that achieves near linear-time complexity under realistic assumptions and is optimized for high-throughput batch inference. Unlike existing Rust-based tokenizers such as HuggingFace Tokenizers or OpenAI's tiktoken, whose runtimes are dominated by regex pre-tokenization and exhibit $O(n \log n)$ complexity, BlockBPE eliminates regex pre-tokenization. This incurs a small loss in generation quality but enables highly parallelized token merges within thread blocks, reducing overall complexity to $O(nd)$ where $d \ll n$. On high-batch inference workloads, BlockBPE achieves up to 2x higher throughput than tiktoken and 2.5x higher than HuggingFace Tokenizers.
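To make the round-parallel merging idea concrete, below is a minimal CPU sketch, assuming $d$ is read as the number of merge rounds. The names (`parallel_bpe`, `merge_rank`, `merge_id`) and the local-minimum selection rule are illustrative assumptions, not BlockBPE's actual GPU kernel, and the sketch is not guaranteed to reproduce sequential BPE output in every case. Each round ranks all adjacent pairs at once and applies every merge whose priority beats that of its overlapping neighbors, so picked merges never collide.

```python
# Hypothetical CPU sketch of round-based parallel BPE merging; names like
# parallel_bpe, merge_rank, and merge_id are illustrative, not BlockBPE's API.
INF = float("inf")

def parallel_bpe(ids, merge_rank, merge_id):
    """ids: token ids; merge_rank[(a, b)]: merge priority (lower first);
    merge_id[(a, b)]: id of the merged token. Returns merged ids."""
    while True:
        n = len(ids)
        # Rank every adjacent pair; INF means no merge rule applies.
        rank = [merge_rank.get((ids[i], ids[i + 1]), INF) for i in range(n - 1)]
        if all(r == INF for r in rank):
            return ids
        # A pair merges this round iff its rank beats both overlapping
        # neighbors (ties broken leftward), so picked merges cannot overlap.
        pick = [
            r != INF
            and r < (rank[i - 1] if i > 0 else INF)
            and r <= (rank[i + 1] if i < n - 2 else INF)
            for i, r in enumerate(rank)
        ]
        # Apply all picked merges in a single pass over the sequence.
        out, i = [], 0
        while i < n:
            if i < n - 1 and pick[i]:
                out.append(merge_id[(ids[i], ids[i + 1])])
                i += 2
            else:
                out.append(ids[i])
                i += 1
        ids = out

# Toy demo: two merge rules applied to the raw bytes of "banana".
merge_rank = {(97, 110): 0, (98, 97): 1}   # merge (a,n) before (b,a)
merge_id = {(97, 110): 256, (98, 97): 257}
print(parallel_bpe(list(b"banana"), merge_rank, merge_id))  # [98, 256, 256, 97]
```

On a GPU, each round's pair ranking and merge selection maps naturally onto threads within a block, so each round does $O(n)$ work in parallel and $d$ rounds give the $O(nd)$ total claimed in the abstract.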
Similar Papers
Parity-Aware Byte-Pair Encoding: Improving Cross-lingual Fairness in Tokenization
Computation and Language
Makes computer language tools fair for all languages.
Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier
Computation and Language
Makes computers understand words better by merging them.