xGR: Efficient Generative Recommendation Serving at Scale
By: Qingxiao Sun, Tongxuan Liu, Shen Zhang, et al.
Recommendation systems deliver substantial economic benefits by providing personalized predictions. Generative recommendation (GR) integrates LLMs to enhance the understanding of long user-item sequences. Despite employing attention-based architectures, GR's workload differs markedly from that of LLM serving: GR typically processes long prompts while producing short, fixed-length outputs, yet the computational cost of each decode step is especially high due to the large beam width. In addition, since beam search operates over a vast item space, the sorting overhead becomes particularly time-consuming. We propose xGR, a GR-oriented serving system that meets strict low-latency requirements under high-concurrency scenarios. First, xGR unifies the processing of the prefill and decode phases through staged computation and a separated KV cache. Second, xGR enables early sorting termination and mask-based item filtering with data-structure reuse. Third, xGR reconstructs the overall pipeline to exploit multi-level overlap and multi-stream parallelism. Our experiments on real-world recommendation service datasets demonstrate that xGR achieves at least 3.49x the throughput of the state-of-the-art baseline under strict latency constraints.
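The abstract's second contribution, early sorting termination combined with mask-based item filtering over a large item vocabulary, can be illustrated with a small sketch. This is not xGR's actual implementation; it is a minimal NumPy approximation in which a reusable boolean mask filters invalid items and a partial selection (`np.argpartition`) avoids fully sorting the whole vocabulary, sorting only the k beam survivors. The function name, vocabulary size, and beam width below are assumptions for illustration.

```python
import numpy as np

def topk_with_mask(scores, valid_mask, k):
    """Illustrative sketch (not xGR's real kernel): pick the top-k
    beam-expansion candidates from a large item vocabulary.

    valid_mask is a reusable boolean array over the item space; invalid
    items (e.g., filtered or out-of-catalog IDs) are masked to -inf so
    they can never enter the beam.
    """
    masked = np.where(valid_mask, scores, -np.inf)
    # Partial selection: O(n) average instead of a full O(n log n) sort,
    # which is the "early sorting termination" idea in miniature.
    top_idx = np.argpartition(masked, -k)[-k:]
    # Only the k survivors are fully sorted (descending).
    order = np.argsort(masked[top_idx])[::-1]
    top_idx = top_idx[order]
    return top_idx, masked[top_idx]

# Hypothetical usage: 1M-item vocabulary, beam width 32.
rng = np.random.default_rng(0)
scores = rng.standard_normal(1_000_000).astype(np.float32)
mask = rng.random(1_000_000) > 0.1  # ~90% of items valid
idx, vals = topk_with_mask(scores, mask, k=32)
```

Because the mask array is allocated once per vocabulary and reused across decode steps and requests, this also hints at the data-structure reuse the abstract mentions.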
Similar Papers
DualGR: Generative Retrieval with Long and Short-Term Interests Modeling
Information Retrieval
Recommends videos users will watch longer.
Multi-Aspect Cross-modal Quantization for Generative Recommendation
Information Retrieval
Helps computers guess what you'll like next.