ReSpec: Towards Optimizing Speculative Decoding in Reinforcement Learning Systems
By: Qiaoling Chen, Zijun Liu, Peng Sun, and more
Potential Business Impact:
Makes AI training much faster by speeding up how the model generates text while it learns, without hurting the quality of what it learns.
Adapting large language models (LLMs) via reinforcement learning (RL) is often bottlenecked by the generation stage, which can consume over 75% of the training time. Speculative decoding (SD) accelerates autoregressive generation in serving systems, but its behavior under RL training remains largely unexplored. We identify three critical gaps that hinder the naive integration of SD into RL systems: diminishing speedups at large batch sizes, drafter staleness under continual actor updates, and drafter-induced policy degradation. To address these gaps, we present ReSpec, a system that adapts SD to RL through three complementary mechanisms: dynamically tuning SD configurations, evolving the drafter via knowledge distillation, and weighting updates by rollout rewards. On Qwen models (3B–14B), ReSpec achieves up to 4.5x speedup while preserving reward convergence and training stability, providing a practical solution for efficient RL-based LLM adaptation.
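To make the third mechanism concrete, below is a minimal Python sketch of a reward-weighted knowledge-distillation loss for keeping the drafter aligned with the continually updated actor. This is an illustration of the general idea only, not the authors' implementation: the function name, tensor shapes, forward-KL objective, and the particular reward normalization are all assumptions made for the example.

# Minimal sketch (assumption, not the paper's actual code) of reward-weighted
# knowledge distillation: the drafter is pulled toward the actor's token
# distributions, with high-reward rollouts contributing more to the update.
import torch
import torch.nn.functional as F

def reward_weighted_kd_loss(drafter_logits, actor_logits, rewards, temperature=1.0):
    # drafter_logits, actor_logits: [batch, seq_len, vocab]; rewards: [batch], assumed >= 0.
    log_p_drafter = F.log_softmax(drafter_logits / temperature, dim=-1)
    p_actor = F.softmax(actor_logits / temperature, dim=-1)
    # Forward KL(actor || drafter) per token, averaged over each rollout's tokens.
    kl_per_token = F.kl_div(log_p_drafter, p_actor, reduction="none").sum(dim=-1)
    kl_per_rollout = kl_per_token.mean(dim=-1)
    # Normalize rewards into weights so higher-reward rollouts dominate the loss.
    weights = rewards / rewards.sum().clamp_min(1e-8)
    return (weights * kl_per_rollout).sum()

# Toy usage: random tensors stand in for actor/drafter outputs on a batch of rollouts.
loss = reward_weighted_kd_loss(torch.randn(4, 16, 128), torch.randn(4, 16, 128), torch.rand(4))

Under this reading of the abstract, the loss updates only the drafter, so the actor's policy-gradient updates are untouched, and the reward weighting biases distillation toward the trajectories the actor is being reinforced on, which is one plausible way to counter drafter staleness without degrading the policy.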
Similar Papers
When, What, and How: Rethinking Retrieval-Enhanced Speculative Decoding
Computation and Language
Makes AI write faster without losing quality.
Beat the long tail: Distribution-Aware Speculative Decoding for RL Training
Machine Learning (CS)
Speeds up AI learning by predicting future words faster.
Scaling LLM Speculative Decoding: Non-Autoregressive Forecasting in Large-Batch Scenarios
Computation and Language
Makes AI write faster without wasting power.