FlowSpec: Continuous Pipelined Speculative Decoding for Efficient Distributed LLM Inference
By: Xing Liu, Lizhuo Luo, Ming Tang, and more
Potential Business Impact:
Makes smart computer programs run faster on phones.
Distributed inference is a promising approach to enabling large language model (LLM) inference at the network edge: it distributes the inference process across multiple devices so that the LLM fits into device memory. Recent pipeline-based approaches can parallelize communication and computation, which helps reduce inference latency. However, this benefit diminishes when inference requests at the network edge are sparse, leaving the pipeline at low utilization. To enable efficient distributed LLM inference at the edge, we propose \textbf{FlowSpec}, a pipeline-parallel tree-based speculative decoding framework. FlowSpec incorporates three key mechanisms to improve decoding efficiency: 1) score-based step-wise verification prioritizes more important draft tokens so that tokens are accepted earlier; 2) efficient draft management prunes invalid tokens while maintaining correct causal relationships during verification; 3) dynamic draft expansion strategies supply high-quality speculative inputs. These techniques work in concert to enhance both pipeline utilization and speculative efficiency. We evaluate FlowSpec against baselines on a real-world testbed. Experimental results demonstrate that our proposed framework significantly improves inference speed across diverse models and configurations, achieving speedup ratios of 1.28$\times$-1.79$\times$ over the baselines. Our code is publicly available at \href{https://github.com/Leosang-lx/FlowSpec#}{https://github.com/Leosang-lx/FlowSpec\#}
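To make the abstract's first two mechanisms concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation; names such as DraftNode, verification_order, and prune_rejected are invented for illustration). It shows the general idea of score-based step-wise verification, where higher-score draft tokens in the speculation tree are scheduled for verification earlier but never before their parents, and of pruning a rejected token's subtree so the remaining draft keeps a valid causal structure.

# Hypothetical sketch of score-ordered verification and subtree pruning
# over a draft token tree; illustrative only, assumed data structures.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class DraftNode:
    token_id: int
    score: float                                 # e.g. draft-model probability for this token
    parent: Optional["DraftNode"] = None
    children: List["DraftNode"] = field(default_factory=list)

    def add_child(self, token_id: int, score: float) -> "DraftNode":
        child = DraftNode(token_id, score, parent=self)
        self.children.append(child)
        return child


def verification_order(root: DraftNode) -> List[DraftNode]:
    """Schedule higher-score draft tokens first, but never a child
    before its parent, so causal dependencies are respected."""
    order: List[DraftNode] = []
    frontier = list(root.children)               # tokens whose parents are already scheduled
    while frontier:
        frontier.sort(key=lambda n: n.score, reverse=True)
        best = frontier.pop(0)
        order.append(best)
        frontier.extend(best.children)           # children become eligible only now
    return order


def prune_rejected(rejected: DraftNode) -> None:
    """Drop a rejected token together with its entire subtree; surviving
    nodes keep their parent links, so the causal structure stays valid."""
    if rejected.parent is not None:
        rejected.parent.children.remove(rejected)


if __name__ == "__main__":
    root = DraftNode(token_id=-1, score=1.0)     # last accepted token
    a = root.add_child(101, 0.62)
    b = root.add_child(102, 0.30)
    a.add_child(201, 0.40)
    b.add_child(202, 0.25)

    print([n.token_id for n in verification_order(root)])   # [101, 201, 102, 202]
    prune_rejected(b)                                        # b rejected, its subtree goes too
    print([n.token_id for n in verification_order(root)])   # [101, 201]

In the paper's pipelined setting this ordering matters because verification proceeds step by step across pipeline stages, so putting likely-to-be-accepted tokens first lets accepted tokens be committed earlier; the dynamic draft expansion mechanism (not sketched here) then refills the tree with fresh speculative inputs.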
Similar Papers
SpecPipe: Accelerating Pipeline Parallelism-based LLM Inference with Speculative Decoding
Machine Learning (CS)
Makes AI talk faster by guessing words.
PipeSpec: Breaking Stage Dependencies in Hierarchical LLM Decoding
Artificial Intelligence
Makes AI talk and write much faster.
FlexSpec: Frozen Drafts Meet Evolving Targets in Edge-Cloud Collaborative LLM Speculative Decoding
Distributed, Parallel, and Cluster Computing
Lets phones run smart AI without slow internet.