Steering Pretrained Drafters during Speculative Decoding
By: Frédéric Berdoz, Peer Rheinboldt, Roger Wattenhofer
Potential Business Impact:
Speeds up AI text generation by keeping a small draft model aligned with the large model that checks its work.
Speculative decoding accelerates language-model inference by splitting generation into fast drafting and parallel verification. Its main limitation is drafter-verifier misalignment, which lowers token acceptance and reduces overall effectiveness. While small drafting heads trained from scratch compensate with speed, they struggle when verification dominates latency or when inputs are out of distribution. In contrast, pretrained drafters, though slower, achieve higher acceptance rates thanks to stronger standalone generation capabilities, making them competitive when drafting latency is negligible relative to verification or communication overhead. In this work, we improve the acceptance rates of pretrained drafters by introducing a lightweight dynamic alignment mechanism: a steering vector computed from the verifier's hidden states and injected into the pretrained drafter. Compared to existing offline alignment methods such as distillation, our approach increases the number of accepted tokens by up to 35% under standard sampling and 22% under greedy sampling, while incurring negligible computational overhead. Importantly, it can be retrofitted to existing architectures and pretrained models, enabling rapid adoption.
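The abstract does not specify how the steering vector is computed or injected. As a minimal illustrative sketch only: one plausible reading is that a learned linear projection maps the verifier's hidden state into the drafter's hidden space and the result is added to the drafter's hidden state before it drafts the next tokens. All names, dimensions, and the additive-injection form below are assumptions for illustration, not the paper's actual method (here the projection is random rather than learned).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden sizes: a large verifier and a smaller pretrained drafter.
D_VERIFIER, D_DRAFTER = 4096, 1024

# Hypothetical learned projection (random here, for illustration) mapping the
# verifier's hidden state to a steering vector in the drafter's hidden space.
W_steer = rng.standard_normal((D_VERIFIER, D_DRAFTER)) * 0.01


def steering_vector(verifier_hidden: np.ndarray) -> np.ndarray:
    """Map the verifier's last hidden state to a drafter-space steering vector."""
    return verifier_hidden @ W_steer


def steer_drafter(drafter_hidden: np.ndarray, verifier_hidden: np.ndarray) -> np.ndarray:
    """Inject the steering vector additively into the drafter's hidden state,
    nudging the drafter toward the verifier's distribution at negligible cost
    (one matrix-vector product and one addition per step)."""
    return drafter_hidden + steering_vector(verifier_hidden)


# Toy usage: steer one drafter hidden state with one verifier hidden state.
h_verifier = rng.standard_normal(D_VERIFIER)
h_drafter = rng.standard_normal(D_DRAFTER)
steered = steer_drafter(h_drafter, h_verifier)
print(steered.shape)  # (1024,)
```

The appeal of such a scheme, as the abstract argues, is that it is dynamic (recomputed from the verifier's current hidden states at inference time) rather than a one-off offline alignment like distillation, and it leaves the pretrained drafter's weights untouched, so it can be retrofitted to existing model pairs.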