DCP: Addressing Input Dynamism In Long-Context Training via Dynamic Context Parallelism
By: Chenyu Jiang, Zhenkun Cai, Ye Tian, and more
Potential Business Impact:
Speeds up AI training on long inputs by splitting the work across chips more evenly.
Context parallelism has emerged as a key technique to support long-context training, a growing trend in generative AI for modern large models. However, existing context parallel methods rely on static parallelization configurations that overlook the dynamic nature of training data: specifically, the variability in sequence lengths and token relationships (i.e., attention patterns) across samples. As a result, these methods often suffer from unnecessary communication overhead and imbalanced computation. In this paper, we present DCP, a dynamic context parallel training framework that introduces fine-grained blockwise partitioning of both data and computation. By enabling flexible mapping of data and computation blocks to devices, DCP can adapt to varying sequence characteristics, effectively reducing communication and improving memory and computation balance. Micro-benchmarks demonstrate that DCP accelerates attention by 1.19x~2.45x under causal masks and 2.15x~3.77x under sparse attention patterns. Additionally, we observe end-to-end training speed-ups of 0.94x~1.16x for causal masks and 1.00x~1.46x for sparse masks.
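To make the blockwise idea concrete, the sketch below (not the authors' implementation) shows one way a causally masked batch with varying sequence lengths could be split into attention blocks and greedily mapped to devices to balance compute. The block size, the `AttnBlock` structure, and the token-pair cost model are all illustrative assumptions, not details from the paper.

```python
# Minimal sketch of blockwise partitioning + device mapping for context-parallel
# attention. All names, constants, and the cost model are illustrative assumptions.

from dataclasses import dataclass

BLOCK_SIZE = 1024  # tokens per block; an assumed tunable


@dataclass
class AttnBlock:
    seq_id: int    # which sample in the batch
    q_block: int   # query block index within the sample
    kv_block: int  # key/value block index within the sample
    cost: int      # relative cost proxy: number of attended token pairs


def build_blocks(seq_lens: list[int]) -> list[AttnBlock]:
    """Enumerate the (query, key/value) block pairs kept by a causal mask.

    Under a causal mask, query block i attends to key/value blocks 0..i:
    off-diagonal blocks are dense, the diagonal block is roughly half-filled.
    """
    blocks = []
    for sid, n in enumerate(seq_lens):
        n_blocks = (n + BLOCK_SIZE - 1) // BLOCK_SIZE
        for qi in range(n_blocks):
            q_tokens = min(BLOCK_SIZE, n - qi * BLOCK_SIZE)
            for ki in range(qi + 1):
                k_tokens = min(BLOCK_SIZE, n - ki * BLOCK_SIZE)
                cost = q_tokens * k_tokens if ki < qi else q_tokens * k_tokens // 2
                blocks.append(AttnBlock(sid, qi, ki, cost))
    return blocks


def assign_blocks(blocks: list[AttnBlock], num_devices: int) -> list[list[AttnBlock]]:
    """Greedy longest-processing-time assignment: place each block on the
    currently least-loaded device to balance per-device attention compute."""
    plan = [[] for _ in range(num_devices)]
    load = [0] * num_devices
    for blk in sorted(blocks, key=lambda b: b.cost, reverse=True):
        dev = min(range(num_devices), key=load.__getitem__)
        plan[dev].append(blk)
        load[dev] += blk.cost
    return plan


if __name__ == "__main__":
    # Two samples of very different lengths, split across 4 devices.
    plan = assign_blocks(build_blocks([8192, 2048]), num_devices=4)
    for dev, blks in enumerate(plan):
        print(f"device {dev}: {len(blks)} blocks, cost={sum(b.cost for b in blks)}")
```

In this toy version the cost of a block depends on how much of it the mask keeps, so a greedy assignment naturally spreads the heavy diagonal and long-sequence blocks across devices; a real system would also have to account for communication when a device's blocks need remote key/value data.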
Similar Papers
DYCP: Dynamic Context Pruning for Long-Form Dialogue with LLMs
Computation and Language
Makes chatbots remember more and answer faster.
Optimizing Long-context LLM Serving via Fine-grained Sequence Parallelism
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster.
Scaling Generative Recommendations with Context Parallelism on Hierarchical Sequential Transducers
Information Retrieval
Lets recommendation systems remember more user history.