Speculative Sampling via Exponential Races

Published: April 21, 2025 | arXiv ID: 2504.15475v1

By: Szymon Kobus, Deniz Gündüz

Potential Business Impact:

Makes AI write faster by guessing ahead.

Business Areas:
A/B Testing, Data and Analytics

Speculative decoding accelerates large language model inference using a smaller draft model. In this paper, we establish a surprising connection between speculative decoding and channel simulation, which aims at simulating a noisy channel using as few bits as possible. This connection allows us to provide an information-theoretic analysis of the speed-up that can be achieved by speculative decoding. Leveraging this link, we derive an explicit relation between the generation speed-up and the number of tokens $k$ generated by the draft model for large $k$, which serves as an upper bound for all $k$. We also propose a novel speculative decoding method via exponential races (ERSD) that matches state-of-the-art performance.
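The exponential-race idea underlying the paper can be illustrated with a standard fact: if each outcome $i$ of a categorical distribution is assigned an arrival time $E_i / p_i$ with $E_i \sim \mathrm{Exp}(1)$, the earliest arrival is outcome $i$ with probability exactly $p_i$. Below is a minimal sketch of this sampler (the function name `exponential_race_sample` is illustrative; this is not the paper's ERSD algorithm, only the basic race construction it builds on):

```python
import random

def exponential_race_sample(probs, rng=random):
    """Sample an index from a categorical distribution via an exponential race.

    Each outcome i receives an arrival time E_i / p_i with E_i ~ Exp(1),
    so outcome i arrives first with probability p_i (for probs summing to 1).
    """
    best_i, best_t = None, float("inf")
    for i, p in enumerate(probs):
        if p <= 0:
            continue  # outcomes with zero probability never win the race
        t = rng.expovariate(1.0) / p
        if t < best_t:
            best_i, best_t = i, t
    return best_i
```

Because the same race can be run by a draft model and a target model sharing the same exponential noise, this construction links token sampling to channel simulation, which is the bridge the paper exploits.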

Country of Origin
🇬🇧 United Kingdom

Page Count
15 pages

Category
Computer Science:
Computation and Language