Score: 1

PDTrim: Targeted Pruning for Prefill-Decode Disaggregation in Inference

Published: August 29, 2025 | arXiv ID: 2509.04467v2

By: Hao Zhang, Mengsi Lyu, Zhuo Chen, and more

Potential Business Impact:

Shrinks large language models and speeds up inference, cutting compute, memory, and data-transfer costs.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) demonstrate exceptional capabilities across various tasks, but their deployment is constrained by high computational and memory costs. Model pruning provides an effective means to alleviate these demands. However, existing methods often ignore the characteristics of prefill-decode (PD) disaggregation in practice. In this paper, we propose a novel pruning method for PD disaggregation inference, enabling more precise and efficient block and KV Cache pruning. Our approach constructs pruning and distillation sets to perform iterative block removal independently for the prefill and decode stages, obtaining better pruning solutions. Moreover, we introduce a token-aware cache pruning mechanism that retains all KV Cache in the prefill stage but selectively reuses entries for the first and last token sequences in selected layers during decode, reducing communication costs with minimal overhead. Extensive experiments demonstrate that our approach consistently achieves strong performance in both PD disaggregation and unified (non-disaggregated) PD settings. Under the same default settings, our method achieves improved performance and faster inference, along with a 4.95× reduction in data transmission bandwidth consumption.
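The token-aware cache pruning idea can be pictured with a small sketch. The snippet below is a minimal, hypothetical Python illustration (not the authors' implementation): the prefill node keeps the full KV Cache locally, but only the cache entries for the first and last token positions in a configurable subset of layers are packed and shipped to the decode node, which is where the bandwidth saving comes from. The function name `select_kv_for_transfer` and the `first_n`/`last_n` defaults are assumptions made for illustration.

```python
import torch


def select_kv_for_transfer(kv_cache, reuse_layers, first_n=4, last_n=64):
    """Pick which KV Cache entries the prefill node sends to the decode node.

    kv_cache: list over layers of (key, value) tensors shaped
              [batch, heads, seq_len, head_dim].
    reuse_layers: set of layer indices whose cache is reused at decode time.
    first_n / last_n: how many leading / trailing token positions to keep
                      (illustrative defaults, not taken from the paper).
    """
    transfer = {}
    for layer_idx, (k, v) in enumerate(kv_cache):
        if layer_idx not in reuse_layers:
            continue  # layers outside the selected set are not transferred
        seq_len = k.shape[2]
        # Keep the first and last token spans; the middle positions are dropped
        # from the transfer payload, shrinking prefill-to-decode communication.
        keep = torch.cat([
            torch.arange(min(first_n, seq_len)),
            torch.arange(max(seq_len - last_n, first_n), seq_len),
        ])
        transfer[layer_idx] = (
            k[:, :, keep, :].contiguous(),
            v[:, :, keep, :].contiguous(),
        )
    return transfer


# Toy usage: a 4-layer cache with 1024 prefill tokens, reusing layers 1 and 3.
if __name__ == "__main__":
    cache = [(torch.randn(1, 8, 1024, 64), torch.randn(1, 8, 1024, 64))
             for _ in range(4)]
    payload = select_kv_for_transfer(cache, reuse_layers={1, 3})
    sent = sum(k.numel() + v.numel() for k, v in payload.values())
    full = sum(k.numel() + v.numel() for k, v in cache)
    print(f"transferred {sent / full:.1%} of the full KV Cache")
```

In this toy run only a few percent of the full cache crosses the network, which mirrors the paper's motivation of trading a small amount of cache reuse for a large cut in transmission bandwidth; the actual layer selection and token-span criteria are those defined in the paper, not the placeholder defaults above.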

Page Count
23 pages

Category
Computer Science:
Computation and Language