Fast Inference via Hierarchical Speculative Decoding
By: Amir Globerson, Haim Kaplan, Yishay Mansour, and more
Potential Business Impact:
Makes AI write faster by checking its work.
Transformer language models generate text autoregressively, making inference latency proportional to the number of tokens generated. Speculative decoding reduces this latency without sacrificing output quality by leveraging a small draft model to propose tokens that the larger target model verifies in parallel. In practice, however, there may exist a set of potential draft models, ranging from faster but less accurate to slower yet more reliable. We introduce Hierarchical Speculative Decoding (HSD), an algorithm that stacks these draft models into a hierarchy, where each model proposes tokens and the next larger model verifies them in a single forward pass, until the target model itself verifies the remaining tokens. We derive an expression for the expected latency of any such hierarchy and show that selecting the latency-optimal hierarchy can be done in polynomial time. Empirically, HSD gives up to 1.2x speed-up over the best single-draft baseline, demonstrating the practicality of our algorithm in reducing generation latency beyond previous techniques.
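To make the hierarchy idea concrete, below is a minimal, illustrative sketch of hierarchical speculative decoding. It is not the paper's implementation: the toy bigram "models", the function names (`generate_block`, `verify`, `hierarchical_decode`), and the block size `k` are all assumptions for illustration, and the latency-optimal hierarchy selection is not reproduced. The sketch only shows the recursive structure: the smallest model drafts a block, each larger model verifies it with the standard speculative-sampling accept/resample rule, and the surviving tokens are passed up until the target model has verified them.

```python
# Illustrative sketch of hierarchical speculative decoding (HSD).
# Toy models and hyperparameters are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = 16  # toy vocabulary size

def make_model(temperature):
    """Toy 'language model': returns next-token probabilities given a context.
    Higher temperature stands in for a smaller, noisier draft model."""
    W = rng.normal(size=(VOCAB, VOCAB))
    def probs(context):
        logits = W[context[-1]] / temperature
        e = np.exp(logits - logits.max())
        return e / e.sum()
    return probs

def verify(verifier, draft, context, tokens):
    """Standard speculative-sampling verification: accept each drafted token
    with prob min(1, p_verifier / p_draft); on the first rejection, resample
    from the residual distribution and stop."""
    ctx, accepted = list(context), []
    for t in tokens:
        p, q = verifier(ctx), draft(ctx)
        if rng.random() < min(1.0, p[t] / q[t]):
            accepted.append(t)
            ctx.append(t)
        else:
            residual = np.maximum(p - q, 0.0)
            residual = residual / residual.sum() if residual.sum() > 0 else p
            accepted.append(int(rng.choice(VOCAB, p=residual)))
            break
    return accepted

def generate_block(level, context, k, models):
    """Generate up to k tokens at hierarchy `level` (0 = smallest draft).
    The block is drafted by the level below, then verified at this level."""
    if level == 0:
        # Base case: the smallest model samples k tokens autoregressively.
        ctx, out = list(context), []
        for _ in range(k):
            p = models[0](ctx)
            t = int(rng.choice(VOCAB, p=p))
            out.append(t)
            ctx.append(t)
        return out
    draft_tokens = generate_block(level - 1, context, k, models)
    return verify(models[level], models[level - 1], context, draft_tokens)

def hierarchical_decode(models, prompt, n_tokens, k=4):
    """`models` is ordered smallest draft -> ... -> target model."""
    out = list(prompt)
    target_level = len(models) - 1
    while len(out) - len(prompt) < n_tokens:
        out.extend(generate_block(target_level, out, k, models))
    return out[:len(prompt) + n_tokens]

# Example: a three-level hierarchy (small draft -> mid draft -> target).
models = [make_model(t) for t in (2.0, 1.3, 1.0)]
print(hierarchical_decode(models, prompt=[3], n_tokens=12))
```

In this sketch, the tokens that survive verification at level i are distributed according to model i (the usual speculative-sampling guarantee), so they serve as a valid draft for the next larger model; in a real deployment each verification would be a single batched forward pass of that model, which is where the latency savings come from.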
Similar Papers
Fast Inference via Hierarchical Speculative Decoding
Machine Learning (CS)
Makes AI write stories much faster.
3-Model Speculative Decoding
Computation and Language
Makes AI talk faster by using a team of helpers.
Speculative Decoding in Decentralized LLM Inference: Turning Communication Latency into Computation Throughput
Distributed, Parallel, and Cluster Computing
Makes AI talk faster when shared.