Nightjar: Dynamic Adaptive Speculative Decoding for Large Language Models Serving
By: Rui Li, Zhaoning Zhang, Libo Zhang, and more
Potential Business Impact:
Makes AI answer faster by guessing better.
Speculative decoding (SD) accelerates LLM inference by verifying draft tokens in parallel. However, this method presents a critical trade-off: it improves throughput in low-load, memory-bound systems but degrades performance in high-load, compute-bound environments due to verification overhead. Current SD implementations use a fixed speculative length, failing to adapt to dynamic request rates and creating a significant performance bottleneck in real-world serving scenarios. To overcome this, we propose Nightjar, a novel learning-based algorithm for adaptive speculative inference that adjusts to request load by dynamically selecting the optimal speculative length for different batch sizes and even disabling speculative decoding when it provides no benefit. Experiments show that Nightjar achieves up to 14.8% higher throughput and 20.2% lower latency compared to standard speculative decoding, demonstrating robust efficiency for real-time serving.
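The abstract gives no code, but the core idea, treating each speculative length as an option whose payoff is measured throughput for the current batch size, can be sketched as a simple bandit-style controller. Everything below (the AdaptiveSpecLengthController class, its choose/update methods, the epsilon-greedy rule) is an illustrative assumption, not Nightjar's actual learning algorithm:

```python
import random
from collections import defaultdict

class AdaptiveSpecLengthController:
    """Hypothetical sketch: pick a speculative length per batch size.

    Tracks a running-average reward (tokens/sec) for each
    (batch_size, spec_length) pair and selects greedily with
    epsilon-exploration. spec_length == 0 means speculation disabled.
    """

    def __init__(self, max_spec_len=8, epsilon=0.1):
        self.max_spec_len = max_spec_len
        self.epsilon = epsilon
        # (batch_size, spec_length) -> [total_reward, count]
        self.stats = defaultdict(lambda: [0.0, 0])

    def choose(self, batch_size):
        # Explore occasionally so changing load regimes get re-measured.
        if random.random() < self.epsilon:
            return random.randint(0, self.max_spec_len)
        # Otherwise exploit the best-known length for this batch size.
        best_len, best_avg = 0, float("-inf")
        for k in range(self.max_spec_len + 1):
            total, count = self.stats[(batch_size, k)]
            avg = total / count if count else 0.0
            if avg > best_avg:
                best_len, best_avg = k, avg
        return best_len

    def update(self, batch_size, spec_length, tokens_per_sec):
        # Record the throughput observed after running one decoding
        # step with the chosen speculative length.
        entry = self.stats[(batch_size, spec_length)]
        entry[0] += tokens_per_sec
        entry[1] += 1

# Example usage (numbers are made up):
controller = AdaptiveSpecLengthController()
k = controller.choose(batch_size=32)
# ... run one decoding step with k draft tokens, measure throughput ...
controller.update(batch_size=32, spec_length=k, tokens_per_sec=1850.0)
```

Encoding "speculation disabled" as length 0 lets the same selection rule cover the paper's claim that Nightjar can switch SD off entirely when a compute-bound, high-load batch makes verification overhead unprofitable.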
Similar Papers
Speculative Decoding in Decentralized LLM Inference: Turning Communication Latency into Computation Throughput
Distributed, Parallel, and Cluster Computing
Makes AI talk faster when shared.
AdaSD: Adaptive Speculative Decoding for Efficient Language Model Inference
Computation and Language
Makes AI talk faster without losing its smarts.
Efficient LLM Inference over Heterogeneous Edge Networks with Speculative Decoding
Systems and Control
Makes AI answer questions much faster.