Projected Microbatch Accumulation yields reference-free proximal policy updates for reinforcement learning

Published: January 15, 2026 | arXiv ID: 2601.10498v1

By: Nilin Abrahamsen

Potential Business Impact:

Enables more stable and efficient reinforcement-learning fine-tuning of large language models, without the overhead of maintaining a reference policy.

Business Areas:
Application Performance Management, Data and Analytics, Software

This note introduces Projected Microbatch Accumulation (PROMA), a proximal policy update method for large language model fine-tuning. PROMA accumulates policy gradients across microbatches by projecting out sequence-wise gradient components before microbatch aggregation. The projection is applied layer-wise during the backward pass, enabling efficient implementation without additional forward or backward passes. Empirically, PROMA enforces tighter control of local KL divergence than GRPO, resulting in more stable policy learning. Unlike PPO and GRPO, PROMA achieves proximal updates without inducing entropy collapse and does not rely on a reference policy or likelihood-ratio clipping.
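The core mechanism can be illustrated with a small sketch. The code below shows one plausible reading of the layer-wise projection step, assuming that, for each layer, the component of the microbatch gradient lying along each individual sequence's gradient is removed before the result is added to the running accumulator. The function names (`project_out`, `accumulate_with_projection`) and the exact choice of which directions get projected out are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_out(vec, direction, eps=1e-12):
    """Return `vec` with its component along `direction` removed (orthogonal projection)."""
    norm_sq = float(np.dot(direction, direction))
    if norm_sq < eps:
        return vec
    return vec - (np.dot(vec, direction) / norm_sq) * direction

def accumulate_with_projection(accumulator, layer_grads, seq_grads):
    """Layer-wise accumulation of a microbatch gradient after projecting out
    per-sequence gradient directions.

    accumulator: {layer_name: running gradient sum}        (updated in place)
    layer_grads: {layer_name: this microbatch's gradient}  (flattened arrays)
    seq_grads:   {layer_name: list of per-sequence gradients for this microbatch}

    Hypothetical reading of the abstract, not the paper's exact algorithm.
    """
    for layer, g in layer_grads.items():
        g_proj = g.astype(float).copy()
        for g_seq in seq_grads.get(layer, []):
            g_proj = project_out(g_proj, g_seq)
        accumulator[layer] = accumulator.get(layer, np.zeros_like(g_proj)) + g_proj

# Toy usage: the component along the single sequence gradient is removed.
acc = {}
layer_grads = {"layer0": np.array([1.0, 2.0, 3.0])}
seq_grads = {"layer0": [np.array([0.0, 1.0, 0.0])]}
accumulate_with_projection(acc, layer_grads, seq_grads)
# acc["layer0"] is now [1.0, 0.0, 3.0]
```

Because the projection is a vector operation on gradients that the backward pass already produces, a scheme like this adds no extra forward or backward passes, which is consistent with the efficiency claim in the summary above.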

Page Count
4 pages

Category
Computer Science:
Machine Learning (CS)