UltraLLaDA: Scaling the Context Length to 128K for Diffusion Large Language Models
By: Guangxin He, Shen Nie, Fengqi Zhu, and more
Potential Business Impact:
Lets AI understand much longer documents.
Diffusion LLMs have attracted growing interest, with much recent work highlighting their potential across downstream tasks; yet the long-context behavior of diffusion LLMs remains largely uncharted. We present a case study of post-training techniques for extending the context window of diffusion LLMs (i.e., LLaDA) without retraining from scratch. We show that a simple modification to the standard Rotary Positional Embeddings (RoPE) extension effectively accommodates the probabilistic modeling inherent in the diffusion process, enabling stable scaling to longer context ranges. We further compare masking strategies used during post-training and analyze their impact on optimization stability and long-range recall. Instantiating these insights, we introduce UltraLLaDA, a diffusion LLM with a 128K-token context window that, in our empirical evaluation on long-context tasks, significantly outperforms training-free baselines. Our results highlight this positional extension as a key lever for scaling diffusion LLMs to extended contexts and offer practical guidance for practitioners seeking 128K-scale context via efficient post-training.
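The abstract does not spell out UltraLLaDA's exact RoPE modification, so the following is only a minimal illustrative sketch of the kind of positional extension such post-training typically builds on: an NTK-style rescaling of the rotary base. The function names (rope_frequencies, apply_rope), the scaling rule, and the example parameters are assumptions for illustration, not the paper's published method.

```python
# Minimal sketch (assumption, not UltraLLaDA's published recipe): extending
# RoPE to longer contexts by rescaling the rotary base, NTK-style.
import torch


def rope_frequencies(head_dim: int, base: float = 10000.0,
                     scale: float = 1.0) -> torch.Tensor:
    # NTK-style base rescaling: base' = base * scale^(d / (d - 2)).
    # With scale > 1, rotations slow down so positions far beyond the
    # original training window still map to distinct, in-range angles.
    scaled_base = base * scale ** (head_dim / (head_dim - 2))
    exponents = torch.arange(0, head_dim, 2, dtype=torch.float32) / head_dim
    return 1.0 / (scaled_base ** exponents)           # shape: (head_dim // 2,)


def apply_rope(x: torch.Tensor, positions: torch.Tensor,
               freqs: torch.Tensor) -> torch.Tensor:
    # Rotate query/key features x of shape (seq_len, head_dim) by position.
    angles = positions[:, None].float() * freqs[None, :]  # (seq_len, head_dim // 2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., 0::2], x[..., 1::2]
    rotated = torch.stack((x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos), dim=-1)
    return rotated.flatten(-2)                        # back to (seq_len, head_dim)


# Example: stretch a model's positions toward a much longer window (scale = 32,
# e.g. roughly 4K -> 128K); values here are illustrative only.
freqs = rope_frequencies(head_dim=128, scale=32.0)
q = torch.randn(16, 128)                              # toy (seq_len, head_dim) queries
q_rotated = apply_rope(q, torch.arange(16), freqs)
```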
Similar Papers
LongLLaDA: Unlocking Long Context Capabilities in Diffusion LLMs
Computation and Language
Lets AI remember more of long stories.
Scaling Instruction-Tuned LLMs to Million-Token Contexts via Hierarchical Synthetic Data Generation
Computation and Language
Makes computers understand much longer stories.
From 128K to 4M: Efficient Training of Ultra-Long Context Large Language Models
Computation and Language
Lets computers understand much longer stories.