LoPT: Lossless Parallel Tokenization Acceleration for Long Context Inference of Large Language Model
By: Wei Shao, Lingchao Zheng, Pengyu Wang, and more
Potential Business Impact:
Lets AI read and process long texts much faster.
Long context inference scenarios have become increasingly important for large language models, yet they introduce significant computational latency. While prior research has optimized long-sequence inference through operators, model architectures, and system frameworks, tokenization remains an overlooked bottleneck. Existing parallel tokenization methods accelerate processing by segmenting the text and tokenizing the segments in multiple processes, but they produce inconsistent results because of boundary artifacts introduced when the segments are merged. To address this, we propose LoPT, a novel Lossless Parallel Tokenization framework that ensures output identical to that of standard sequential tokenization. Our approach employs character-position-based matching and dynamic chunk length adjustment to align and merge tokenized segments accurately. Extensive experiments across diverse long-text datasets demonstrate that LoPT achieves significant speedup while guaranteeing lossless tokenization. We also provide a theoretical proof of consistency and comprehensive analytical studies to validate the robustness of our method.
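To make the overlap-and-merge idea concrete, here is a minimal, hedged sketch of how character-position-based matching can splice parallel tokenizations back together losslessly. It is not the authors' implementation: the toy regex tokenizer, the `split_with_overlap`, `tokenize_with_offsets`, and `merge` helpers, and the chunk/overlap sizes are assumptions chosen purely for illustration; a real setup would substitute a subword tokenizer that reports character offsets (e.g. a Hugging Face "fast" tokenizer with return_offsets_mapping=True).

```python
"""
Illustrative sketch only: split text into overlapping chunks, tokenize the
chunks in parallel, then splice adjacent tokenizations at a character
position where both agree on a token boundary. Not the LoPT implementation.
"""
import re
from concurrent.futures import ProcessPoolExecutor

# Toy tokenizer: maximal runs of word chars, whitespace, or punctuation.
TOKEN_RE = re.compile(r"\w+|\s+|[^\w\s]+")

def tokenize_with_offsets(text, base=0):
    """Return (token, global_start, global_end) triples for `text`."""
    return [(m.group(), base + m.start(), base + m.end())
            for m in TOKEN_RE.finditer(text)]

def split_with_overlap(text, chunk_len, overlap):
    """Cut `text` into ~chunk_len-char chunks; neighbours share `overlap` chars."""
    chunks, start = [], 0
    while start < len(text):
        end = min(len(text), start + chunk_len)
        chunks.append((start, text[start:end]))
        if end == len(text):
            break
        start = end - overlap
    return chunks

def merge(left, right):
    """
    Splice two overlapping tokenizations at a global character position where
    the left sequence ends a token and the right sequence starts one.
    Tokens outside the overlap are untouched, which keeps the merge lossless
    whenever such a matching boundary exists.
    """
    left_boundaries = {end for _, _, end in left}
    for _, start, _ in right:
        if start in left_boundaries:
            return ([t for t in left if t[2] <= start] +
                    [t for t in right if t[1] >= start])
    # No agreeing boundary: this is where a scheme like LoPT would adjust the
    # chunk/overlap length and re-tokenize the boundary region.
    raise ValueError("no matching boundary; grow the overlap and retry")

if __name__ == "__main__":
    text = "Long context inference has become increasingly important. " * 200
    chunks = split_with_overlap(text, chunk_len=1000, overlap=64)
    with ProcessPoolExecutor() as pool:
        parts = list(pool.map(tokenize_with_offsets,
                              [c for _, c in chunks],
                              [s for s, _ in chunks]))
    merged = parts[0]
    for part in parts[1:]:
        merged = merge(merged, part)
    # Lossless check: identical to tokenizing the whole text sequentially.
    assert merged == tokenize_with_offsets(text)
    print(f"merged {len(parts)} chunks into {len(merged)} tokens, losslessly")
```

The `merge` step shows why consistency hinges on finding an agreed token boundary inside the overlap; when none exists, that is where dynamic chunk length adjustment would grow the overlap and re-tokenize before splicing.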
Similar Papers
Parallel Token Prediction for Language Models
Computation and Language
Makes computers write sentences much faster.
SemToken: Semantic-Aware Tokenization for Efficient Long-Context Language Modeling
Computation and Language
Makes computers understand words better and faster.
Systematic Evaluation of Optimization Techniques for Long-Context Language Models
Computation and Language
Speeds up AI thinking without losing smarts.