Accelerating Large-Scale Reasoning Model Inference with Sparse Self-Speculative Decoding
By: Yilong Zhao, Jiaming Tang, Kan Zhu, and more
Potential Business Impact:
Makes AI answer questions much faster.
Reasoning language models have demonstrated remarkable capabilities on challenging tasks by generating elaborate chain-of-thought (CoT) solutions. However, such lengthy generation shifts the inference bottleneck from compute-bound to memory-bound. To generate each token, the model applies full attention over all previously generated tokens, requiring memory access to an ever-growing KV-Cache. Consequently, longer generations demand more memory access at every step, placing substantial pressure on memory bandwidth. To address this, we introduce SparseSpec, a speculative decoding framework that reuses the same model as both the draft and the target model (i.e., self-speculation). SparseSpec features a novel sparse attention mechanism, PillarAttn, as the draft model, which accurately selects critical tokens by elegantly reusing information from the verification stage. Furthermore, SparseSpec co-designs self-speculation with three system innovations: (1) a unified scheduler that batches token drafting and verification, (2) delayed verification for CPU/GPU overlap, and (3) dynamic KV-Cache management to maximize memory utilization. Across various models and datasets, SparseSpec outperforms state-of-the-art solutions, delivering up to a 2.13x throughput speedup.
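Below is a minimal sketch of the self-speculation loop described in the abstract, under stated assumptions: toy_logits stands in for the shared draft/target model, select_critical uses a simple sink-plus-recent heuristic rather than PillarAttn's selection (which reuses attention information from the verification stage), and verification runs token by token instead of SparseSpec's batched, delayed verification. All names are illustrative, not the paper's API.

import numpy as np

VOCAB = 32


def toy_logits(context: np.ndarray) -> np.ndarray:
    """Stand-in for one decoding step of the (shared) model over a context."""
    # Deterministic pseudo-logits derived from the context, so draft and
    # target agree exactly whenever they see the same tokens.
    h = int(np.sum(context)) % VOCAB
    logits = np.zeros(VOCAB)
    logits[h] = 5.0
    return logits


def select_critical(kv_positions: np.ndarray, budget: int) -> np.ndarray:
    """Pick a small set of 'critical' KV-Cache positions for the sparse
    draft pass: a fixed sink prefix plus the most recent tokens.
    (Illustrative heuristic; PillarAttn instead reuses verification info.)"""
    if len(kv_positions) <= budget:
        return kv_positions
    sink = kv_positions[:4]
    recent = kv_positions[len(kv_positions) - (budget - 4):]
    return np.concatenate([sink, recent])


def speculative_step(tokens: list[int], draft_len: int, budget: int) -> list[int]:
    """Draft draft_len tokens with sparse attention, then verify them with
    full attention over the whole sequence and keep the accepted prefix."""
    # --- Draft: same weights, but attend only to the selected subset.
    drafted = []
    ctx = np.array(tokens)
    for _ in range(draft_len):
        keep = select_critical(np.arange(len(ctx)), budget)
        logits = toy_logits(ctx[keep])            # sparse context only
        nxt = int(np.argmax(logits))
        drafted.append(nxt)
        ctx = np.append(ctx, nxt)

    # --- Verify: full attention over all previous tokens. Shown one drafted
    # token at a time; a real system verifies the whole draft in one pass.
    accepted = []
    ctx = np.array(tokens)
    for d in drafted:
        target = int(np.argmax(toy_logits(ctx)))  # full context
        if target != d:
            accepted.append(target)               # correct and stop
            break
        accepted.append(d)
        ctx = np.append(ctx, d)
    return tokens + accepted


if __name__ == "__main__":
    seq = [1, 2, 3]
    for _ in range(5):
        seq = speculative_step(seq, draft_len=4, budget=8)
    print(seq)

The point of the sketch is the division of labor: the draft pass reads only a small, fixed-budget slice of the KV-Cache (cheap on memory bandwidth), while the full-attention verification pass decides how many drafted tokens to keep, so output quality still follows the target model.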
Similar Papers
SpecAttn: Speculating Sparse Attention
Computation and Language
Makes AI understand long texts much faster.
SPIRe: Boosting LLM Inference Throughput with Speculative Decoding
Machine Learning (CS)
Makes AI write much faster, even with lots of text.
Scaling LLM Speculative Decoding: Non-Autoregressive Forecasting in Large-Batch Scenarios
Computation and Language
Makes AI write faster without wasting power.