Projected Microbatch Accumulation yields reference-free proximal policy updates for reinforcement learning
By: Nilin Abrahamsen
Potential Business Impact:
Makes AI learn better and faster.
This note introduces Projected Microbatch Accumulation (PROMA), a proximal policy update method for large language model fine-tuning. PROMA accumulates policy gradients across microbatches by projecting out sequence-wise gradient components before microbatch aggregation. The projection is applied layer-wise during the backward pass, enabling efficient implementation without additional forward or backward passes. Empirically, PROMA enforces tighter control of local KL divergence than GRPO, resulting in more stable policy learning. Unlike PPO and GRPO, PROMA achieves proximal updates without inducing entropy collapse and does not rely on a reference policy or likelihood-ratio clipping.
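The abstract only sketches the mechanism, so the following is a minimal illustrative reading rather than the paper's implementation: per-sequence policy gradients are accumulated, and at each parameter tensor (layer-wise) the component of a new sequence's gradient along the already-accumulated direction is projected out before aggregation. The helper names (`project_out`, `seq_loss_fn`) and the specific projection rule are assumptions; the note states the projection can be fused into a single backward pass, whereas this sketch takes one backward per sequence for readability.

```python
import torch

def project_out(g_seq, g_acc, eps=1e-12):
    """Remove from g_seq its component along the accumulated direction g_acc."""
    denom = g_acc.flatten().dot(g_acc.flatten()).clamp_min(eps)
    coeff = g_seq.flatten().dot(g_acc.flatten()) / denom
    return g_seq - coeff * g_acc

def accumulate_projected(model, sequences, seq_loss_fn):
    """Accumulate per-sequence policy gradients, projecting each layer's
    contribution against the running accumulator before adding it.
    Illustrative only; not the paper's reference implementation."""
    accum = {n: torch.zeros_like(p)
             for n, p in model.named_parameters() if p.requires_grad}
    for seq in sequences:
        model.zero_grad(set_to_none=True)
        seq_loss_fn(model, seq).backward()          # per-sequence policy gradient
        for name, p in model.named_parameters():
            if p.grad is None:
                continue
            g = p.grad.detach()
            if accum[name].abs().sum() > 0:
                g = project_out(g, accum[name])     # layer-wise projection
            accum[name] += g
    for name, p in model.named_parameters():        # expose result to the optimizer
        if name in accum:
            p.grad = accum[name] / max(len(sequences), 1)
```

Under this reading, no reference policy or likelihood-ratio clipping appears anywhere: the proximal behavior comes entirely from limiting how far the aggregated update can move along any single sequence's gradient direction.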
Similar Papers
Reinforcement Learning in POMDP's via Direct Gradient Ascent
Machine Learning (CS)
Teaches robots to learn by trying things.
PRISMA: Reinforcement Learning Guided Two-Stage Policy Optimization in Multi-Agent Architecture for Open-Domain Multi-Hop Question Answering
Artificial Intelligence
Helps computers answer hard questions by finding clues.
A-3PO: Accelerating Asynchronous LLM Training with Staleness-aware Proximal Policy Approximation
Machine Learning (CS)
Makes AI learn faster without extra work.