Nightjar: Dynamic Adaptive Speculative Decoding for Large Language Models Serving

Published: December 27, 2025 | arXiv ID: 2512.22420v1

By: Rui Li, Zhaoning Zhang, Libo Zhang, and more

Potential Business Impact:

Makes AI answer faster by adapting how far ahead it guesses to the current server load.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Speculative decoding (SD) accelerates LLM inference by verifying draft tokens in parallel. However, this method presents a critical trade-off: it improves throughput in low-load, memory-bound systems but degrades performance in high-load, compute-bound environments due to verification overhead. Current SD implementations use a fixed speculative length, failing to adapt to dynamic request rates and creating a significant performance bottleneck in real-world serving scenarios. To overcome this, we propose Nightjar, a novel learning-based algorithm for adaptive speculative inference that adjusts to request load by dynamically selecting the optimal speculative length for different batch sizes and even disabling speculative decoding when it provides no benefit. Experiments show that Nightjar achieves up to 14.8% higher throughput and 20.2% lower latency compared to standard speculative decoding, demonstrating robust efficiency for real-time serving.
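The core trade-off described above can be sketched with a toy cost model: with per-token acceptance probability p, a draft of length k yields an expected (1 - p^(k+1)) / (1 - p) accepted tokens per verification step, while the verification cost grows once the batch saturates compute. The sketch below picks the speculative length that maximizes tokens per unit cost, disabling SD (length 0) under heavy load. All function names, constants, and the cost model are illustrative assumptions, not Nightjar's actual algorithm:

```python
# Toy model of load-adaptive speculative-length selection.
# (Illustrative sketch only; constants and cost model are assumptions.)

def expected_tokens_per_step(spec_len: int, accept_rate: float) -> float:
    # Expected accepted tokens per verification step for draft length k,
    # with per-token acceptance probability p: (1 - p^(k+1)) / (1 - p).
    if accept_rate >= 1.0:
        return spec_len + 1.0
    return (1.0 - accept_rate ** (spec_len + 1)) / (1.0 - accept_rate)

def step_cost(spec_len: int, batch_size: int,
              draft_cost: float = 0.05, token_capacity: int = 64) -> float:
    # Drafting costs draft_cost per draft token. Verification is free to
    # batch extra tokens while memory-bound; once batch_size * (k + 1)
    # exceeds the compute capacity, its cost scales with the token count.
    verify = max(1.0, batch_size * (spec_len + 1) / token_capacity)
    return spec_len * draft_cost + verify

def best_spec_len(batch_size: int, accept_rate: float = 0.7,
                  max_len: int = 8) -> int:
    # Choose the speculative length (0 disables SD) that maximizes
    # expected accepted tokens per unit of step cost.
    def goodput(k: int) -> float:
        return expected_tokens_per_step(k, accept_rate) / step_cost(k, batch_size)
    return max(range(max_len + 1), key=goodput)
```

Under this model a small batch (memory-bound) favors a long draft, while a large batch (compute-bound) drives the chosen length to 0, i.e. SD is switched off — mirroring the qualitative behavior the abstract describes. Nightjar learns this decision rather than using a fixed formula.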

Country of Origin
🇨🇳 China

Page Count
6 pages

Category
Computer Science:
Distributed, Parallel, and Cluster Computing