PDTrim: Targeted Pruning for Prefill-Decode Disaggregation in Inference
By: Hao Zhang, Mengsi Lyu, Zhuo Chen, and more
Potential Business Impact:
Makes AI models smaller and faster.
Large Language Models (LLMs) demonstrate exceptional capabilities across various tasks, but their deployment is constrained by high computational and memory costs. Model pruning provides an effective means of alleviating these demands. However, existing methods often ignore the characteristics of prefill-decode (PD) disaggregation as used in practice. In this paper, we propose a novel pruning method for PD-disaggregated inference that enables more precise and efficient pruning of blocks and the KV Cache. Our approach constructs pruning and distillation sets and performs iterative block removal independently for the prefill and decode stages, yielding better pruning solutions for each. Moreover, we introduce a token-aware cache pruning mechanism that retains the full KV Cache in the prefill stage but, during decode, selectively reuses entries for the first and last token sequences in selected layers, reducing communication costs with minimal overhead. Extensive experiments demonstrate that our approach consistently achieves strong performance in both PD-disaggregated and unified (non-disaggregated) settings. Under the same default settings, our method achieves improved performance and faster inference, along with a 4.95× reduction in data-transmission bandwidth consumption.
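To make the cache-transfer idea concrete, here is a minimal sketch, not the authors' implementation, of how a prefill node might build a reduced KV Cache payload for the decode node: most layers ship their full cache, while a chosen subset of layers ships only the entries for the first and last token positions. The function and parameter names (select_kv_for_transfer, pruned_layers, keep_first, keep_last) and the specific span sizes are assumptions for illustration; the paper's actual layer and token selection criteria are not reproduced here.

```python
# Illustrative sketch only (not the paper's code) of token-aware KV Cache
# selection for the prefill-to-decode handoff: selected layers transfer only
# the first/last token spans, all other layers transfer their full cache.
import torch

def select_kv_for_transfer(kv_cache, pruned_layers, keep_first=4, keep_last=64):
    """kv_cache: list of (key, value) tensors, each shaped
    [batch, heads, seq_len, head_dim]. Returns the per-layer payload that
    the prefill node would send to the decode node."""
    payload = []
    for layer_idx, (k, v) in enumerate(kv_cache):
        seq_len = k.shape[2]
        if layer_idx in pruned_layers:
            # Hypothetical selection: keep only the first and last token spans.
            idx = torch.cat([
                torch.arange(0, min(keep_first, seq_len)),
                torch.arange(max(seq_len - keep_last, keep_first), seq_len),
            ])
            payload.append((k[:, :, idx, :], v[:, :, idx, :]))
        else:
            # Non-selected layers keep their full cache.
            payload.append((k, v))
    return payload

# Example: 8 layers, a 512-token prompt, with layers 4-7 selected for pruning.
cache = [(torch.randn(1, 8, 512, 64), torch.randn(1, 8, 512, 64)) for _ in range(8)]
sent = select_kv_for_transfer(cache, pruned_layers={4, 5, 6, 7})
full_elems = sum(k.numel() + v.numel() for k, v in cache)
sent_elems = sum(k.numel() + v.numel() for k, v in sent)
print(f"transfer reduced to {sent_elems / full_elems:.2%} of the full cache")
```

The sizing of the kept spans and the choice of layers are placeholders; the point of the sketch is only that restricting transferred entries to boundary-token spans in a subset of layers shrinks the prefill-to-decode payload, which is the source of the bandwidth savings reported in the abstract.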
Similar Papers
Prefill-Decode Aggregation or Disaggregation? Unifying Both for Goodput-Optimized LLM Serving
Distributed, Parallel, and Cluster Computing
Boosts AI chat speed by 77% while keeping delays balanced
A Dynamic PD-Disaggregation Architecture for Maximizing Goodput in LLM Inference Serving
Distributed, Parallel, and Cluster Computing
Makes AI answer questions faster and more reliably.
Disaggregated Prefill and Decoding Inference System for Large Language Model Serving on Multi-Vendor GPUs
Distributed, Parallel, and Cluster Computing
Makes AI run faster on different computers.