Scaling Generative Recommendations with Context Parallelism on Hierarchical Sequential Transducers
By: Yue Dong, Han Li, Shen Li, and more
Potential Business Impact:
Lets recommendation systems remember more user history.
Large-scale recommendation systems must process an immense volume of daily user interactions, requiring effective modeling of high-cardinality, heterogeneous features to ensure accurate predictions. In prior work, we introduced Hierarchical Sequential Transducers (HSTU), an attention-based architecture for modeling high-cardinality, non-stationary streaming recommendation data that exhibits favorable scaling laws within the generative recommender (GR) framework. Recent studies and experiments demonstrate that attending to longer user-history sequences yields significant metric improvements. However, scaling sequence length is activation-heavy, necessitating parallelism solutions that effectively shard activation memory. In transformer-based LLMs, context parallelism (CP) is a commonly used technique that distributes computation along the sequence-length dimension across multiple GPUs, effectively reducing memory usage from attention activations. In contrast, production ranking models typically use jagged input tensors to represent user interaction features, which introduces unique challenges for implementing CP. In this work, we introduce context parallelism with jagged-tensor support for HSTU attention, establishing foundational capabilities for scaling the sequence dimension. Our approach enables a 5.3x increase in supported user-interaction sequence length while achieving a 1.55x scaling factor when combined with Distributed Data Parallelism (DDP).
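To make the jagged-tensor challenge concrete, the sketch below shows one plausible way to shard a jagged batch along the sequence dimension for CP. It assumes the common flattened representation of jagged tensors (a values tensor plus per-user offsets, as in PyTorch/FBGEMM conventions); the function name `shard_jagged_for_cp` and its contiguous-chunk policy are hypothetical illustrations, not the paper's actual implementation. The point it demonstrates: because every user's sequence length differs, there is no single sequence axis to slice evenly, unlike dense CP in LLMs.

```python
import torch

def shard_jagged_for_cp(values: torch.Tensor, offsets: torch.Tensor, cp_world_size: int):
    """Split a jagged batch (values + offsets) into per-rank shards along the
    sequence dimension. Each user's variable-length sequence is cut into
    `cp_world_size` contiguous chunks; rank r receives chunk r of every
    sequence, roughly balancing attention activations across ranks.
    (Illustrative sketch only; the chunking policy is an assumption.)"""
    per_rank_values = [[] for _ in range(cp_world_size)]
    per_rank_lengths = [[] for _ in range(cp_world_size)]
    for i in range(offsets.numel() - 1):
        start, end = offsets[i].item(), offsets[i + 1].item()
        seq = values[start:end]
        # Chunk sizes differ per user because lengths are non-uniform:
        # this is what makes CP on jagged inputs harder than on dense ones.
        for r, chunk in enumerate(torch.tensor_split(seq, cp_world_size)):
            per_rank_values[r].append(chunk)
            per_rank_lengths[r].append(chunk.shape[0])
    shards = []
    for r in range(cp_world_size):
        vals = torch.cat(per_rank_values[r]) if per_rank_values[r] else values.new_empty(0)
        lengths = torch.tensor(per_rank_lengths[r], dtype=torch.long)
        # Rebuild a valid per-rank offsets tensor so each shard is itself jagged.
        rank_offsets = torch.cat([torch.zeros(1, dtype=torch.long), lengths.cumsum(0)])
        shards.append((vals, rank_offsets))
    return shards

# Example: a batch of 2 users with sequence lengths 5 and 3,
# sharded across 2 context-parallel ranks.
values = torch.arange(8)            # flattened user histories
offsets = torch.tensor([0, 5, 8])   # user 0 -> values[0:5], user 1 -> values[5:8]
for rank, (v, o) in enumerate(shard_jagged_for_cp(values, offsets, cp_world_size=2)):
    print(f"rank {rank}: values={v.tolist()} offsets={o.tolist()}")
```

In this toy run, rank 0 receives the first chunks of both users (values [0, 1, 2] and [5, 6], offsets [0, 3, 5]) and rank 1 the remainders (values [3, 4] and [7], offsets [0, 2, 3]); a real CP attention pass would additionally exchange keys and values across ranks so each chunk can attend over the full sequence.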
Similar Papers
Massive Memorization with Hundreds of Trillions of Parameters for Sequential Transducer Generative Recommenders
Information Retrieval
Makes online suggestions faster with long histories.
Optimizing Long-context LLM Serving via Fine-grained Sequence Parallelism
Distributed, Parallel, and Cluster Computing
Makes AI answer questions much faster.
Leveraging Historical and Current Interests for Continual Sequential Recommendation
Information Retrieval
Keeps online shopping suggestions smart over time.