Language Modeling with Learned Meta-Tokens
By: Alok N. Shah, Khush Gupta, Keshav Ramji, and more
Potential Business Impact:
Helps computers remember more words for better understanding.
While modern Transformer-based language models (LMs) have achieved major success in multi-task generalization, they often struggle to capture long-range dependencies within their context window. This work introduces a novel approach using meta-tokens, special tokens injected during pre-training, along with a dedicated meta-attention mechanism that guides LMs to use these tokens. We pre-train a language model with a modified GPT-2 architecture equipped with meta-attention in addition to causal multi-head attention, and study the impact of these tokens on a suite of synthetic tasks. We find that data-efficient pre-training on fewer than 100B tokens with meta-tokens and our meta-attention mechanism achieves strong performance on these tasks after fine-tuning. We suggest that these gains arise from the meta-tokens sharpening the positional encoding, which enables them to operate as trainable, content-based landmarks that implicitly compress preceding context and "cache" it in the meta-token. At inference time, the meta-token points to relevant context, facilitating length generalization up to 2$\times$ the model's context window, even after extension with YaRN. We provide further evidence of these behaviors by visualizing model internals to study the residual stream and by assessing compression quality through an information-theoretic analysis of the rate-distortion tradeoff. Our findings suggest that pre-training LMs with meta-tokens offers a simple, data-efficient method to enhance long-context language modeling performance, while yielding new insights into the nature of length generalization in these models.
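To make the architectural idea concrete, below is a minimal PyTorch sketch of a transformer block that adds a meta-attention pathway alongside standard causal multi-head attention, as the abstract describes. The module name `MetaAttentionBlock`, the `meta_mask` argument, and the specific masking rule (every position may additionally attend only to the meta-tokens that precede it) are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a GPT-2-style block with an extra "meta-attention" pathway.
# Assumption: meta-tokens are already injected into the sequence and flagged
# by a boolean mask; meta-attention restricts keys/values to preceding
# meta-token positions. This is a hypothetical reading of the mechanism.

import torch
import torch.nn as nn


class MetaAttentionBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.causal_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.meta_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ln3 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x: torch.Tensor, meta_mask: torch.Tensor) -> torch.Tensor:
        # x:         (batch, seq, d_model) embeddings, meta-tokens already injected
        # meta_mask: (batch, seq) bool, True where the position holds a meta-token
        B, T, _ = x.shape
        causal = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), diagonal=1
        )  # True = disallowed (future positions)

        # Standard causal multi-head attention over all tokens.
        h = self.ln1(x)
        attn_out, _ = self.causal_attn(h, h, h, attn_mask=causal)
        x = x + attn_out

        # Meta-attention: queries from every position; keys/values limited to
        # preceding meta-tokens (mask = future positions OR non-meta keys).
        h = self.ln2(x)
        not_meta = ~meta_mask                                     # (B, T)
        meta_only = causal.unsqueeze(0) | not_meta.unsqueeze(1)   # (B, T, T)
        # Allow self-attention on the diagonal so no query row is fully masked.
        diag = torch.eye(T, dtype=torch.bool, device=x.device).unsqueeze(0)
        meta_only = meta_only & ~diag
        # Expand to (B * n_heads, T, T) for MultiheadAttention's 3-D mask format.
        meta_only = meta_only.repeat_interleave(self.meta_attn.num_heads, dim=0)
        meta_out, _ = self.meta_attn(h, h, h, attn_mask=meta_only)
        x = x + meta_out

        return x + self.mlp(self.ln3(x))
```

In this sketch, the causal pathway is untouched and the meta pathway acts as a second residual branch, so ordinary tokens can "read" the compressed context cached at meta-token positions; how the paper actually combines or gates the two pathways is not specified in the abstract.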
Similar Papers
Enhancing Latent Computation in Transformers with Latent Tokens
Machine Learning (CS)
Makes AI smarter at understanding new things.
Lossless Token Sequence Compression via Meta-Tokens
Computation and Language
Makes AI understand more with less text.
Thinking Augmented Pre-training
Computation and Language
Teaches computers to think step-by-step for better learning.