Beat the long tail: Distribution-Aware Speculative Decoding for RL Training
By: Zelei Shao, Vikranth Srivatsa, Sanjana Srivastava, and more
Potential Business Impact:
Speeds up AI learning by predicting future words faster.
Reinforcement learning (RL) post-training has become essential for aligning large language models (LLMs), yet its efficiency is increasingly constrained by the rollout phase, where long trajectories are generated token by token. We identify a major bottleneck: the long-tail distribution of rollout lengths, where a small fraction of long generations dominates wall-clock time, and a complementary opportunity: the availability of historical rollouts that reveal stable prompt-level patterns across training epochs. Motivated by these observations, we propose DAS, a Distribution-Aware Speculative decoding framework that accelerates RL rollouts without altering model outputs. DAS integrates two key ideas: an adaptive, nonparametric drafter built from recent rollouts using an incrementally maintained suffix tree, and a length-aware speculation policy that allocates more aggressive draft budgets to the long trajectories that dominate makespan. This design exploits rollout history to sustain acceptance while balancing base- and token-level costs during decoding. Experiments on math and code reasoning tasks show that DAS reduces rollout time by up to 50% while preserving identical training curves, demonstrating that distribution-aware speculative decoding can significantly accelerate RL post-training without compromising learning quality.
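To make the two ideas concrete, here is a minimal sketch of a history-based drafter and a length-aware budget rule. This is an illustrative toy, not the paper's implementation: DAS maintains an incrementally updated suffix tree, whereas this sketch uses a simple n-gram table over past rollouts; all names (`HistoryDrafter`, `draft_budget`) and the threshold values are assumptions for illustration.

```python
from collections import defaultdict

class HistoryDrafter:
    """Toy nonparametric drafter: index past-rollout continuations by
    context n-gram, then draft tokens via longest-suffix match.
    (DAS itself uses an incrementally maintained suffix tree.)"""

    def __init__(self, max_order=4):
        self.max_order = max_order
        # maps a context tuple to the list of next tokens seen after it
        self.table = defaultdict(list)

    def add_rollout(self, tokens):
        """Index every (suffix -> next token) pair from one finished rollout."""
        for n in range(1, self.max_order + 1):
            for i in range(len(tokens) - n):
                self.table[tuple(tokens[i:i + n])].append(tokens[i + n])

    def draft(self, context, budget):
        """Propose up to `budget` draft tokens; the target model would then
        verify them in parallel, keeping only the accepted prefix."""
        out, ctx = [], list(context)
        for _ in range(budget):
            nxt = None
            # try the longest matching suffix first
            for n in range(min(self.max_order, len(ctx)), 0, -1):
                conts = self.table.get(tuple(ctx[-n:]))
                if conts:
                    nxt = max(set(conts), key=conts.count)  # most frequent continuation
                    break
            if nxt is None:
                break  # no history match: stop drafting
            out.append(nxt)
            ctx.append(nxt)
        return out

def draft_budget(predicted_len, threshold=512, short_budget=4, long_budget=16):
    """Length-aware policy sketch: spend a larger draft budget on
    trajectories predicted to be long, since they dominate makespan.
    (Illustrative thresholds, not the paper's values.)"""
    return long_budget if predicted_len >= threshold else short_budget
```

Because drafts are verified against the target model before acceptance, a drafter like this can only change speed, never outputs, which is why rollout acceleration leaves the training curves unchanged.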
Similar Papers
Taming the Long-Tail: Efficient Reasoning RL Training with Adaptive Drafter
Machine Learning (CS)
Trains smart AI faster and cheaper.
ReSpec: Towards Optimizing Speculative Decoding in Reinforcement Learning Systems
Machine Learning (CS)
Makes AI learn much faster and better.
DSD: A Distributed Speculative Decoding Solution for Edge-Cloud Agile Large Model Serving
Machine Learning (CS)
Makes AI talk faster on many devices.