RADAR: Accelerating Large Language Model Inference With RL-Based Dynamic Draft Trees
By: Junjie Ma, Jinlong Li
Inference with modern Large Language Models (LLMs) is expensive and slow, and speculative sampling has emerged as an effective solution. However, the number of calls to the draft model for generating candidate tokens in speculative sampling is typically a preset hyperparameter, which lacks flexibility. To generate and use candidate tokens more effectively, we propose RADAR, a novel speculative sampling method with RL-based dynamic draft trees. RADAR formulates the draft tree generation process as a Markov Decision Process (MDP) and employs offline reinforcement learning to train a prediction model that decides, in real time, whether to call the draft model again, reducing redundant computation and further accelerating inference. Evaluations across three LLMs and four tasks show that RADAR achieves a speedup of 3.17x-4.82x over the auto-regressive decoding baseline. The code is available at https://github.com/minaduki-sora/RADAR.
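To make the core idea concrete, below is a minimal, self-contained sketch of how a learned policy could replace a fixed draft-call budget in speculative sampling. It is an illustration under stated assumptions, not RADAR's actual implementation: `StopPolicy`, `draft_step_confidence`, `build_draft_tree`, and the three-feature state are all hypothetical names and choices made for this example.

```python
# Hypothetical sketch: dynamic draft-tree expansion with a learned
# stop/continue policy, instead of a preset number of draft-model calls.
# Not RADAR's code; all names and features here are illustrative.
import torch
import torch.nn as nn


class StopPolicy(nn.Module):
    """Tiny policy network mapping a draft-tree state to {stop, continue}.

    In an offline-RL setting, such a policy would be trained on logged
    drafting trajectories with a reward that balances the number of
    accepted tokens against the cost of each extra draft-model call.
    """

    def __init__(self, state_dim: int = 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 16), nn.ReLU(), nn.Linear(16, 2))

    def act(self, state: torch.Tensor) -> int:
        # Greedy action for the sketch; 0 = stop drafting, 1 = continue.
        return int(self.net(state).argmax())


def draft_step_confidence() -> float:
    """Stand-in for one draft-model call; returns a top-token confidence."""
    return torch.rand(1).item()


def build_draft_tree(policy: StopPolicy, max_calls: int = 8) -> int:
    """Expand the draft tree call-by-call until the policy says stop."""
    confs = []
    for depth in range(max_calls):
        confs.append(draft_step_confidence())
        # MDP state: current depth, latest confidence, running mean confidence.
        state = torch.tensor([float(depth), confs[-1], sum(confs) / len(confs)])
        if policy.act(state) == 0:
            break  # stop drafting; the target model would now verify the tree
    return len(confs)  # number of draft-model calls actually made


if __name__ == "__main__":
    policy = StopPolicy()  # untrained here; offline RL would fit it to logged data
    print("draft calls this step:", build_draft_tree(policy))
```

The design point the sketch captures is that the drafting depth becomes state-dependent: easy continuations (high draft confidence) justify more draft calls before a single verification pass by the target model, while hard ones trigger early verification, avoiding wasted draft computation.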