Score: 2

Efficient Speculative Decoding for Llama at Scale: Challenges and Solutions

Published: August 11, 2025 | arXiv ID: 2508.08192v1

By: Bangsheng Tang, Carl Chengyan Fu, Fei Kou, and more

BigTech Affiliations: Meta

Potential Business Impact:

Makes large language models generate responses much faster, reducing serving latency.

Speculative decoding is a standard method for accelerating the inference speed of large language models. However, scaling it to production environments poses several engineering challenges, including efficiently implementing operations such as tree attention and multi-round speculative decoding on GPUs. In this paper, we detail the training and inference optimization techniques that we have implemented to enable EAGLE-based speculative decoding at production scale for Llama models. With these changes, we achieve a new state-of-the-art inference latency for Llama models. For example, Llama 4 Maverick decodes at a speed of about 4 ms per token (with a batch size of one) on 8 NVIDIA H100 GPUs, which is 10% faster than the previous best-known method. Furthermore, for EAGLE-based speculative decoding, our optimizations achieve a speed-up of 1.4x to 2.0x for large batch sizes at production scale.
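
To make the core idea concrete, below is a minimal sketch of the greedy draft-and-verify loop that speculative decoding builds on. This is not the authors' EAGLE implementation: the toy models and the helper names (`draft_next`, `target_next`, `speculative_decode`) are invented for illustration, and a real system would verify all drafted tokens in a single batched target-model forward pass (or a tree-attention pass, as the paper discusses) rather than token by token.

```python
# Illustrative sketch of greedy speculative decoding (assumptions labeled above;
# not the paper's code). Two toy callables stand in for the cheap draft model
# and the expensive target model: each maps a token sequence to a next token.

def draft_next(tokens: list[int]) -> int:
    # Hypothetical cheap draft model: a fixed toy rule over the last token.
    return (tokens[-1] * 3 + 1) % 50

def target_next(tokens: list[int]) -> int:
    # Hypothetical expensive target model: mostly agrees with the draft,
    # but occasionally disagrees, forcing a correction.
    guess = (tokens[-1] * 3 + 1) % 50
    return guess if tokens[-1] % 7 else (guess + 1) % 50

def speculative_decode(prompt: list[int], num_tokens: int, k: int = 4) -> list[int]:
    """Decode num_tokens tokens, drafting k candidates per round and
    keeping the longest prefix that the target model agrees with."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < num_tokens:
        # 1) Draft k candidate tokens autoregressively with the cheap model.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify the candidates against the target model's greedy choices
        #    (a real system does this in one forward pass over all k tokens).
        accepted, ctx = [], list(tokens)
        for t in draft:
            want = target_next(ctx)
            if want == t:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(want)  # target's correction ends the round
                break
        else:
            accepted.append(target_next(ctx))  # bonus token: all drafts accepted
        tokens.extend(accepted)
    return tokens[: len(prompt) + num_tokens]

print(speculative_decode([7], num_tokens=12))
```

Each round costs one target-model pass but can emit up to k+1 tokens, which is where the speed-up comes from; the paper's contribution is making this loop, plus extensions like tree attention and multi-round drafting, run efficiently on GPUs at production batch sizes.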

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
15 pages

Category
Computer Science: Computation and Language