Global Resolution: Optimal Multi-Draft Speculative Sampling via Convex Minimization
By: Rahul Krishna Thomas, Arka Pal
Potential Business Impact:
Makes AI write faster without losing quality.
Speculative sampling reduces the latency of autoregressive decoding from a target LLM without sacrificing inference quality, by using a cheap draft model to suggest a candidate token and a verification criterion to accept or resample it. To improve acceptance and decoding efficiency, recent work has explored the multi-draft extension, where $n$ draft tokens are generated at each step and the verification criterion is a distribution conditioned on all of them. When this criterion maximizes the probability of accepting some draft token, it is called the optimal transport (OT). However, finding the OT is difficult: it is the solution of a linear program (OTLP) in over $V^n$ variables, where $V$ is the vocabulary size. Two recent theoretical works have reframed the OTLP in terms of importance sampling or subset selection. In this work, we prove that these formulations are equivalent to an exponentially large relaxed OTLP, so it remains infeasible to solve. We then reverse engineer subset selection to formulate the OTLP as a max-flow problem. With a novel application of polymatroid theory, we reduce the exponentially large OTLP to a convex optimization problem in at most $V$ variables. This allows us to devise an algorithm for optimal $n$-draft speculative sampling, tunable to arbitrary accuracy, when the $n$ tokens are chosen i.i.d. from a single draft model. Finally, we measure acceptance rates and algorithm runtimes across various $n$ and top-$k$ draft sampling settings. Our findings give the first multi-draft algorithm with 90% acceptance and under 100 ms of overhead per generated token, with negligible deviation from the target model distribution.
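As background for the verification criterion the abstract builds on: in the single-draft case, the standard accept/resample rule accepts a draft token $x \sim q$ with probability $\min(1, p(x)/q(x))$ and otherwise resamples from the normalized residual $\max(p - q, 0)$, which guarantees the output token is distributed exactly as the target $p$. A minimal NumPy sketch of that baseline rule (illustrative only; the function and variable names are ours, not the paper's):

```python
import numpy as np

def single_draft_step(p, q, rng):
    """One verification step of standard single-draft speculative sampling.

    p, q: target and draft distributions over the vocabulary (1-D arrays).
    Returns (token, accepted); the returned token is exactly p-distributed.
    """
    x = rng.choice(len(q), p=q)                # draft token sampled from q
    if rng.random() < min(1.0, p[x] / q[x]):   # accept w.p. min(1, p/q)
        return x, True
    residual = np.maximum(p - q, 0.0)          # on rejection, resample from
    residual /= residual.sum()                 # the normalized residual
    return rng.choice(len(p), p=residual), False
```

The multi-draft OTLP is, at toy scale, an ordinary linear program: maximize the probability that the output token matches one of the $n$ drafts, over couplings whose marginals are the i.i.d. draft law $q^{\otimes n}$ and the target $p$. The sketch below is our own construction from the abstract's description (not the authors' algorithm), solved with SciPy's general-purpose `linprog` for $V = 3$, $n = 2$; it makes the exponential variable count concrete:

```python
from itertools import product

import numpy as np
from scipy.optimize import linprog

# Toy instance of the multi-draft OT linear program described in the abstract.
V, n = 3, 2                                  # toy vocabulary size and draft count
rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(V))                # target distribution
q = rng.dirichlet(np.ones(V))                # draft distribution

tuples = list(product(range(V), repeat=n))   # all V^n draft tuples
pos = {t: i for i, t in enumerate(tuples)}
nvar = len(tuples) * V                       # one variable pi(t, y) per pair

# Objective: maximize acceptance probability, i.e. total mass on pairs
# (t, y) with y appearing in the tuple t (linprog minimizes, so negate).
c = np.zeros(nvar)
for t in tuples:
    for y in set(t):
        c[pos[t] * V + y] = -1.0

A_eq, b_eq = [], []
# Draft-side marginal: sum_y pi(t, y) = prod_i q[t_i] for every tuple t.
for t in tuples:
    row = np.zeros(nvar)
    row[pos[t] * V:(pos[t] + 1) * V] = 1.0
    A_eq.append(row)
    b_eq.append(np.prod(q[list(t)]))
# Target-side marginal: sum_t pi(t, y) = p[y] for every output token y.
for y in range(V):
    row = np.zeros(nvar)
    row[y::V] = 1.0
    A_eq.append(row)
    b_eq.append(p[y])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print("optimal acceptance probability:", -res.fun)
```

At realistic scales (vocabulary $V \approx 10^5$ and $n \geq 2$ drafts) this LP has on the order of $V^{n+1}$ variables, which is the infeasibility the abstract refers to and the reason a reduction to a convex program in at most $V$ variables is significant.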
Similar Papers
Not-a-Bandit: Provably No-Regret Drafter Selection in Speculative Decoding for LLMs
Machine Learning (CS)
Makes AI write faster and smarter.
Confidence-Modulated Speculative Decoding for Large Language Models
Computation and Language
Makes AI write faster and smarter.
Steering Pretrained Drafters during Speculative Decoding
Machine Learning (CS)
Makes AI write faster and better.