FP8-RL: A Practical and Stable Low-Precision Stack for LLM Reinforcement Learning
By: Zhaopeng Qiu, Shuang Yu, Jingqi Zhang, and more
Potential Business Impact:
Makes AI faster and reduces memory use.
Reinforcement learning (RL) for large language models (LLMs) is increasingly bottlenecked by rollout (generation), where long output sequence lengths make attention and KV-cache memory dominate end-to-end step time. FP8 offers an attractive lever for accelerating RL by reducing compute cost and memory traffic during rollout, but applying FP8 in RL introduces unique engineering and algorithmic challenges: policy weights change every step (requiring repeated quantization and weight synchronization into the inference engine), and low-precision rollouts can deviate from the higher-precision policy assumed by the trainer, causing train-inference mismatch and potential instability. This report presents a practical FP8 rollout stack for LLM RL, implemented in the veRL ecosystem with support for common training backends (e.g., FSDP/Megatron-LM) and inference engines (e.g., vLLM/SGLang). We (i) enable FP8 W8A8 linear-layer rollout using blockwise FP8 quantization, (ii) extend FP8 to the KV cache, removing long-context memory bottlenecks via per-step QKV scale recalibration, and (iii) mitigate mismatch using importance-sampling-based rollout correction (token-level TIS/MIS variants). Across dense and MoE models, these techniques deliver up to 44% rollout throughput gains while preserving learning behavior comparable to BF16 baselines.
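As a rough illustration of point (i), the sketch below quantizes a 2-D weight tensor into FP8 with one scale per square block. The 128x128 block size, the e4m3 format, and the helper names are assumptions chosen for illustration; the report's actual kernels, block layout, and weight-sync path may differ.

```python
# Hypothetical sketch of blockwise FP8 weight quantization (not the report's code).
import torch

FP8_DTYPE = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8_DTYPE).max  # ~448 for e4m3


def quantize_blockwise_fp8(weight: torch.Tensor, block: int = 128):
    """Quantize a 2-D weight into FP8 with one scale per (block x block) tile."""
    out_f, in_f = weight.shape
    # Pad both dimensions up to multiples of the block size.
    pad_rows, pad_cols = (-out_f) % block, (-in_f) % block
    w = torch.nn.functional.pad(weight.float(), (0, pad_cols, 0, pad_rows))
    rows, cols = w.shape
    # View as (row_blocks, block, col_blocks, block) tiles.
    tiles = w.view(rows // block, block, cols // block, block)
    # One scale per tile, derived from the tile's absolute maximum.
    amax = tiles.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-12)
    scales = amax / FP8_MAX
    q = (tiles / scales).to(FP8_DTYPE).view(rows, cols)[:out_f, :in_f]
    return q, scales[:, 0, :, 0]  # FP8 weight plus per-block scales


# Example: quantize a freshly updated policy weight before syncing it into the
# inference engine, then check the round-trip error on a dequantized copy.
w = torch.randn(4096, 4096, dtype=torch.bfloat16)
q, s = quantize_blockwise_fp8(w)
w_hat = (q.float().view(32, 128, 32, 128) * s[:, None, :, None]).view(4096, 4096)
print((w.float() - w_hat).abs().max())
```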
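For point (ii), an FP8 KV cache needs up-to-date quantization scales because the policy weights, and hence the K/V activation ranges, change every RL step. The sketch below recalibrates per-layer K/V scales from one small forward pass using forward hooks; the module-name filter, the hook output format, and how the scales would be handed to vLLM/SGLang are assumptions, not the report's actual interface.

```python
# Hypothetical per-step K/V scale recalibration for an FP8 KV cache.
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max


@torch.no_grad()
def recalibrate_kv_scales(model, calib_batch, suffix=("k_proj", "v_proj")):
    """Run one calibration forward pass after a weight sync and return
    {module_name: scale} for every K/V projection found by name."""
    amax, handles = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            # Track the absolute maximum K/V activation seen for this layer.
            amax[name] = max(amax.get(name, 0.0), output.abs().max().item())
        return hook

    for name, module in model.named_modules():
        if name.endswith(suffix):  # assumed naming convention for K/V projections
            handles.append(module.register_forward_hook(make_hook(name)))

    model(**calib_batch)  # small batch run with the freshly synced weights

    for h in handles:
        h.remove()

    # Each scale maps the observed activation range onto the representable FP8
    # range; these values would then be pushed into the inference engine's KV cache.
    return {name: peak / FP8_MAX for name, peak in amax.items()}
```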
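For point (iii), the token-level correction reweights each token's loss by a truncated importance ratio between the trainer's policy and the FP8 rollout policy that actually sampled the token. The sketch below shows a TIS-style form under assumed tensor shapes and an illustrative cap of 2.0; the actual veRL integration and the MIS variant (which reweights rather than simply truncates) may differ.

```python
# Hypothetical token-level truncated importance sampling (TIS) correction.
import torch


def tis_corrected_loss(
    per_token_loss: torch.Tensor,     # policy loss per token, shape (batch, seq)
    trainer_logprobs: torch.Tensor,   # log-probs recomputed by the BF16 trainer
    rollout_logprobs: torch.Tensor,   # log-probs logged by the FP8 rollout engine
    response_mask: torch.Tensor,      # 1 for generated tokens, 0 elsewhere
    cap: float = 2.0,
) -> torch.Tensor:
    # Per-token correction: how much more (or less) likely the trainer's policy is
    # to emit this token than the low-precision policy that actually sampled it.
    ratio = torch.exp(trainer_logprobs.detach() - rollout_logprobs.detach())
    # Truncate the ratio so rare large train-inference mismatches cannot dominate
    # the gradient (the "T" in TIS); an MIS-style variant would reweight instead.
    weight = torch.clamp(ratio, max=cap)
    # Apply the detached weight to the existing per-token policy loss.
    masked = weight * per_token_loss * response_mask
    return masked.sum() / response_mask.sum().clamp(min=1.0)
```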
Similar Papers
Jet-RL: Enabling On-Policy FP8 Reinforcement Learning with Unified Training and Rollout Precision Flow
Machine Learning (CS)
Makes AI learn faster and more efficiently.
Towards Fully FP8 GEMM LLM Training at Scale
Machine Learning (CS)
Trains big computer brains faster and better.
QeRL: Beyond Efficiency -- Quantization-enhanced Reinforcement Learning for LLMs
Machine Learning (CS)
Makes smart computer programs learn faster, cheaper.