TROLL: Trust Regions improve Reinforcement Learning for Large Language Models
By: Philipp Becker, Niklas Freymuth, Serge Thilges, and more
Potential Business Impact:
Makes AI language models learn from feedback faster, more stably, and with better final results.
On-policy Reinforcement Learning (RL) with PPO-like clip objectives has become the standard choice for reward-based fine-tuning of large language models (LLMs). Although recent work has explored improved estimators of advantages and normalization, the clipping mechanism itself has remained untouched. Originally introduced as a proxy for principled KL-based trust regions, clipping is a crude approximation that often causes unstable updates and suboptimal performance. We replace the clip objective with a novel discrete differentiable trust region projection, which provides principled token-level KL constraints. The projection operates on a sparse subset of the model's most important token logits to balance computational cost and projection effectiveness. Our approach, Trust Region Optimization for Large Language Models (TROLL), serves as a direct replacement for PPO-like clipping during training and does not alter the model's inference behavior. Across datasets, model families, and advantage-estimation methods, TROLL consistently outperforms PPO-like clipping in terms of training speed, stability, and final success rates.
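To make the abstract's idea concrete, here is a minimal PyTorch sketch contrasting the standard PPO-like clipped surrogate with a token-level KL-constrained projection applied to a sparse top-k subset of logits. The projection shown is a simple interpolation heuristic, not the paper's exact discrete differentiable projection, and all names (`ppo_clip_loss`, `sparse_kl_projection`, `kl_bound`, `top_k`) are illustrative assumptions rather than the authors' API.

```python
import torch
import torch.nn.functional as F

def ppo_clip_loss(new_logp, old_logp, advantages, eps=0.2):
    """Standard PPO-like clipped surrogate loss on per-token log-probs
    of the sampled tokens (shapes: batch x seq_len)."""
    ratio = torch.exp(new_logp - old_logp)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()

def sparse_kl_projection(new_logits, old_logits, kl_bound=0.05, top_k=64):
    """Illustrative token-level trust region (NOT the paper's exact method):
    restrict attention to the old policy's top-k logits, measure the
    categorical KL between old and new distributions on that subset, and
    interpolate the new logits back toward the old ones wherever the KL
    exceeds kl_bound. The operation stays differentiable end to end."""
    # Keep only the most important token logits under the old policy.
    _, idx = old_logits.topk(top_k, dim=-1)
    old_top = torch.gather(old_logits, -1, idx)
    new_top = torch.gather(new_logits, -1, idx)

    old_probs = F.softmax(old_top, dim=-1)
    old_logp = F.log_softmax(old_top, dim=-1)
    new_logp = F.log_softmax(new_top, dim=-1)
    # Per-token KL(old || new) on the sparse subset; clamp away tiny negatives.
    kl = (old_probs * (old_logp - new_logp)).sum(-1, keepdim=True).clamp_min(0.0)

    # Interpolation factor in (0, 1]: shrink the update where KL is too large.
    alpha = torch.clamp(torch.sqrt(kl_bound / (kl + 1e-8)), max=1.0)
    projected_top = old_top + alpha * (new_top - old_top)

    # Scatter the projected logits back; logits outside the subset are unchanged.
    projected = new_logits.clone()
    projected.scatter_(-1, idx, projected_top)
    return projected
```

In a TROLL-style setup, the projected logits would replace the raw new-policy logits when computing an (unclipped) policy-gradient loss, so the per-token KL constraint is enforced only during training; as the abstract notes, the model's inference behavior is unchanged.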
Similar Papers
Trust Region Masking for Long-Horizon LLM Reinforcement Learning
Machine Learning (CS)
Helps AI learn better for longer tasks.
Trust Region Preference Approximation: A simple and stable reinforcement learning algorithm for LLM reasoning
Machine Learning (CS)
Makes AI smarter and safer by learning from choices.
Trust-Region Adaptive Policy Optimization
Machine Learning (CS)
Teaches computers to solve math problems better.