SpecPipe: Accelerating Pipeline Parallelism-based LLM Inference with Speculative Decoding
By: Haofei Yin, Mengbai Xiao, Tinghong Li, and more
Potential Business Impact:
Makes AI talk faster by guessing words.
The demand for large language model inference is rapidly increasing. Pipeline parallelism offers a cost-effective deployment strategy for distributed inference but suffers from high service latency. While incorporating speculative decoding into pipeline parallelism improves performance, it still faces the challenges of low hardware utilization and a narrow speculative window. Inspired by branch prediction in instruction pipelining, we introduce SpecPipe, which fills the pipeline with a request's speculative tokens step by step. By maximizing hardware utilization, SpecPipe ideally decodes one token per pipeline step. Specifically, SpecPipe comprises a dynamic speculative token tree and a pipelined inference framework. The tree dynamically accepts tokens from a speculative token source and feeds them into the inference pipeline. Since the speculative window is relaxed in our framework, a high-accuracy draft model can be integrated without fine-tuning. The pipelined inference framework proceeds through node-wise computation, pruning propagation, and inter-node communication stages. We implement SpecPipe and a variant, SpecPipe-DB, with dynamic batching for single- and multi-request inference, respectively. On an 8-stage pipeline, SpecPipe improves time between tokens on diverse single-request workloads by $4.19\times$-$5.53\times$ over standard pipeline parallelism and by $2.08\times$-$2.38\times$ over prior tree-based speculative decoding methods. For multi-request workloads, SpecPipe-DB achieves $1.64\times$-$2.08\times$ higher throughput and $1.61\times$-$2.06\times$ lower time between tokens than vLLM.
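To make the per-step mechanism in the abstract more concrete, below is a minimal, self-contained sketch of the idea: a speculative token tree accepts draft tokens, each pipeline step verifies tree nodes (node-wise computation), and the pruning of rejected branches is propagated before the next step. All names here (SpeculativeTokenTree, pipeline_step, verify_node) are hypothetical placeholders, not SpecPipe's actual interfaces; the draft model, target model, and inter-node communication are mocked out.

```python
# Hypothetical illustration of a SpecPipe-style per-step loop.
# Not the paper's implementation: the tree, verification, and pruning
# below are toy stand-ins for the real pipeline stages.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class TreeNode:
    token: int
    children: List["TreeNode"] = field(default_factory=list)


class SpeculativeTokenTree:
    """Dynamically accepts speculative tokens and exposes them to the pipeline."""

    def __init__(self, root_token: int):
        self.root = TreeNode(root_token)

    def accept_draft(self, parent: TreeNode, draft_tokens: List[int]) -> List[TreeNode]:
        # Attach tokens proposed by the speculative token source (e.g. a draft model).
        nodes = [TreeNode(t) for t in draft_tokens]
        parent.children.extend(nodes)
        return nodes


def verify_node(node: TreeNode, target_token: int) -> bool:
    # Stand-in for target-model verification on one pipeline stage:
    # accept the node iff the draft token matches the target model's token.
    return node.token == target_token


def pipeline_step(tree: SpeculativeTokenTree, frontier: List[TreeNode],
                  target_token: int) -> Optional[TreeNode]:
    """One pipeline step: node-wise computation over the speculative frontier,
    then pruning propagation (inter-node communication is elided here)."""
    accepted = None
    for node in frontier:                      # node-wise computation
        if verify_node(node, target_token):
            accepted = node
            break
    # Pruning propagation: rejected sibling subtrees are dropped so that
    # downstream stages never compute on the discarded speculative branches.
    if accepted is not None:
        tree.root = accepted
    return accepted


if __name__ == "__main__":
    # Toy demo: the draft source proposes candidates, each step verifies them,
    # and ideally one token is committed per pipeline step.
    tree = SpeculativeTokenTree(root_token=0)
    reference = [1, 2, 3]                      # tokens the target model would emit
    decoded = []
    for target in reference:
        frontier = tree.accept_draft(tree.root, draft_tokens=[target, 99])
        node = pipeline_step(tree, frontier, target)
        if node is not None:
            decoded.append(node.token)
    print(decoded)  # [1, 2, 3]
```

In the actual system, verify_node would correspond to the target model's computation for a tree node on one pipeline stage, and the pruning decision would be communicated to the following stages before the next step, which is how SpecPipe keeps the pipeline filled and approaches one decoded token per step.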
Similar Papers
PipeSpec: Breaking Stage Dependencies in Hierarchical LLM Decoding
Artificial Intelligence
Makes AI talk and write much faster.
FlowSpec: Continuous Pipelined Speculative Decoding for Efficient Distributed LLM Inference
Distributed, Parallel, and Cluster Computing
Makes smart computer programs run faster on phones.
Speculative Decoding via Hybrid Drafting and Rollback-Aware Branch Parallelism
Distributed, Parallel, and Cluster Computing
Makes AI talk much faster by guessing ahead.