Scaling Instruction-Tuned LLMs to Million-Token Contexts via Hierarchical Synthetic Data Generation
By: Linda He, Jue Wang, Maurice Weber, and more
Potential Business Impact:
Enables language models to understand and reason over much longer documents.
Large Language Models (LLMs) struggle with long-context reasoning, not only due to the quadratic scaling of computational complexity with sequence length but also because of the scarcity and expense of annotating long-context data. There has been barely any open-source work that systematically ablates long-context data, nor is there any openly available instruction-tuning dataset with contexts surpassing 100K tokens. To bridge this gap, we introduce a novel post-training synthetic data generation strategy designed to efficiently extend the context window of LLMs while preserving their general task performance. Our approach scalably extends to arbitrarily long context lengths, unconstrained by the length of available real-world data, which effectively addresses the scarcity of raw long-context data. Through a step-by-step rotary position embedding (RoPE) scaling training strategy, we demonstrate that our model, with a context length of up to 1M tokens, performs well on the RULER benchmark and InfiniteBench and maintains robust performance on general language tasks.
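To make the step-by-step RoPE scaling idea concrete, the sketch below shows the standard geometric argument for why the RoPE base (theta) must grow as the target context length grows: the slowest-rotating RoPE dimension should have a wavelength at least as long as the context window. The head dimension, the staged context lengths, and the resulting theta values are illustrative assumptions, not the paper's actual hyperparameters or training recipe.

```python
import math

HEAD_DIM = 128  # assumed attention head dimension (Llama-style models); not from the paper


def min_rope_theta(target_len: int, head_dim: int = HEAD_DIM) -> float:
    """Smallest RoPE base theta whose slowest-rotating dimension has a
    wavelength of at least `target_len` positions.

    RoPE frequencies are theta^(-2i/d) for i in [0, d/2); the slowest is
    theta^(-(d-2)/d), giving a wavelength of 2*pi*theta^((d-2)/d).
    Requiring that wavelength >= target_len and solving for theta yields
    the lower bound computed here.
    """
    return (target_len / (2 * math.pi)) ** (head_dim / (head_dim - 2))


# Hypothetical staged context-extension schedule (illustrative targets only).
for target_len in (131_072, 262_144, 524_288, 1_048_576):
    theta = min_rope_theta(target_len)
    print(f"context {target_len:>9,} tokens -> RoPE theta >= {theta:,.0f}")
```

This bound is only a necessary condition; published long-context recipes typically set the base well above it before continuing training at each stage.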
Similar Papers
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models
Computation and Language
Lets computers understand much longer stories.
UltraLLaDA: Scaling the Context Length to 128K for Diffusion Large Language Models
Computation and Language
Lets AI understand much longer stories.
LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs
Computation and Language
Lets AI remember more of long stories.