Bit-level BPE: Below the byte boundary
By: Sangwhan Moon, Tatsuya Hiraoka, Naoaki Okazaki
Potential Business Impact:
Losslessly shortens tokenized text so language models train and run faster.
Byte-level fallbacks for subword tokenization have become common practice in large language models. In particular, they have proven to be a highly effective, pragmatic solution for preventing out-of-vocabulary (OOV) tokens, especially in larger models. However, breaking a character down into individual bytes significantly increases the sequence length for long-tail tokens in languages such as Chinese, Japanese, and Korean (CJK) and in other character-diverse contexts such as emoji. The longer sequences increase computation during both training and inference. In this work, we propose a simple compression technique that reduces the sequence length losslessly.
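As a concrete illustration of the sequence-length blow-up described in the abstract, the short Python sketch below counts tokens for a small CJK-plus-emoji string under a plain one-token-per-UTF-8-byte fallback. The sample string and the one-byte-per-token assumption are illustrative; the paper's actual bit-level compression scheme is not reproduced here.

```python
# Illustrative only: shows the sequence-length blow-up from byte-level
# fallback that the abstract describes; it does NOT reproduce the paper's
# bit-level compression scheme. The sample string and the assumption of
# one token per UTF-8 byte are illustrative choices.

text = "日本語🙂"  # three CJK characters plus one emoji

char_tokens = list(text)                  # one token per character: 4 tokens
byte_tokens = list(text.encode("utf-8"))  # byte-level fallback: 13 tokens
                                          # (3 bytes per CJK char, 4 for the emoji)

print(len(char_tokens))  # -> 4
print(len(byte_tokens))  # -> 13

# The UTF-8 bytes carry predictable structure (e.g., continuation bytes all
# begin with the bit pattern 10xxxxxx), which is the kind of sub-byte
# redundancy a lossless, bit-level representation could exploit.
for b in byte_tokens:
    print(f"{b:#04x} {b:08b}")
```

Running the sketch shows a 4-character string expanding to 13 byte-level tokens, which is the overhead the proposed compression targets.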
Similar Papers
Entropy-Driven Pre-Tokenization for Byte-Pair Encoding
Computation and Language
Uses entropy-driven signals to guide BPE pre-tokenization, improving handling of Chinese text.
SuperBPE: Space Travel for Language Models
Computation and Language
Lets BPE tokens cross whitespace boundaries for more efficient tokenization.
Boundless Byte Pair Encoding: Breaking the Pre-tokenization Barrier
Computation and Language
Allows BPE merges across pre-tokenization boundaries to build better tokens.