Score: 1

RLHFSpec: Breaking the Efficiency Bottleneck in RLHF Training via Adaptive Drafting

Published: December 4, 2025 | arXiv ID: 2512.04752v1

By: Siqi Wang, Hailong Yang, Junjie Zhu, and more

Potential Business Impact:

Speeds up the training of AI models that learn from human feedback by accelerating the slowest step, generating sample answers, which cuts the time and GPU cost of producing fine-tuned models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement Learning from Human Feedback (RLHF) is an important fine-tuning technique for large language models (LLMs) and comprises three stages: generation, inference, and training. The generation stage produces samples that are then used to infer learnable experiences for training. We observe that the generation stage is the bottleneck of the entire execution process and identify it as the key target for optimization. Specifically, we present the first attempt to integrate speculative decoding into the RLHF generation stage and propose RLHFSpec, an RLHF system that accelerates generation with adaptive speculative decoding and sample reallocation. To fully exploit the performance potential of speculative decoding, especially under the dynamic workload of the generation stage, RLHFSpec uses a workload-aware drafting strategy selection mechanism that selects a near-optimal strategy by jointly considering the verification cost and the number of accepted tokens. RLHFSpec also performs sample reallocation to fully utilize GPU resources, supported by an efficient sample migration mechanism. Experimental results show that RLHFSpec achieves higher generation-stage throughput than state-of-the-art systems and, by effectively alleviating the generation bottleneck, delivers a significant speedup for end-to-end RLHF execution.
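The workload-aware drafting mechanism can be pictured as choosing, at each step, the draft configuration that maximizes expected generated tokens per unit time. The sketch below illustrates that trade-off in Python; the class and function names, the cost model constants, and the acceptance-rate estimate are illustrative assumptions, not RLHFSpec's actual interfaces.

```python
# A minimal sketch of workload-aware drafting strategy selection, assuming a
# simple analytical cost model. DraftStrategy, verification_cost, and the
# per-token acceptance rate are illustrative placeholders, not RLHFSpec's API.
from dataclasses import dataclass

@dataclass
class DraftStrategy:
    draft_len: int      # number of speculative tokens proposed per step
    draft_cost: float   # time (s) for the draft model to propose them

def verification_cost(batch_size: int, draft_len: int) -> float:
    # Assumed cost model: one target-model forward pass verifies
    # batch_size * (draft_len + 1) tokens; constants are placeholders.
    return 0.002 * batch_size * (draft_len + 1) + 0.01

def expected_accepted(draft_len: int, accept_rate: float) -> float:
    # Standard speculative-decoding estimate of tokens committed per step
    # with per-token acceptance rate alpha: 1 + alpha + ... + alpha^draft_len.
    return sum(accept_rate ** k for k in range(draft_len + 1))

def select_strategy(strategies: list[DraftStrategy],
                    batch_size: int, accept_rate: float) -> DraftStrategy:
    """Pick the strategy with the best expected tokens per second, trading off
    verification cost against the expected number of accepted tokens."""
    def goodput(s: DraftStrategy) -> float:
        step_time = s.draft_cost + verification_cost(batch_size, s.draft_len)
        return batch_size * expected_accepted(s.draft_len, accept_rate) / step_time
    return max(strategies, key=goodput)
```

In such a model, the best strategy shifts as batch size and acceptance rate change during generation, which is the dynamic behavior a workload-aware selector is meant to track. Sample reallocation can likewise be sketched as periodically rebalancing unfinished samples across generation workers as sequences finish at different lengths. The greedy heuristic below is an assumed illustration only; an actual migration mechanism must also move runtime state such as the KV cache.

```python
# A hedged sketch of sample reallocation across generation workers. Each worker
# holds per-sample remaining-token estimates; samples are migrated from the
# most-loaded to the least-loaded worker while doing so reduces imbalance.
def reallocate(worker_loads: dict[int, list[int]]) -> dict[int, list[int]]:
    def total(w: int) -> int:
        return sum(worker_loads[w])
    workers = list(worker_loads)
    while True:
        src = max(workers, key=total)   # busiest worker
        dst = min(workers, key=total)   # most idle worker
        if src == dst or not worker_loads[src]:
            break
        sample = max(worker_loads[src])  # migrate the longest remaining sample
        # Move only if the source stays at least as loaded as the destination,
        # which guarantees the imbalance shrinks and the loop terminates.
        if sample <= 0 or total(src) - sample < total(dst) + sample:
            break
        worker_loads[src].remove(sample)
        worker_loads[dst].append(sample)
    return worker_loads
```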

Page Count
12 pages

Category
Computer Science:
Machine Learning (CS)