Bootstrapped Mixed Rewards for RL Post-Training: Injecting Canonical Action Order

Published: December 3, 2025 | arXiv ID: 2512.04277v1

By: Prakhar Gupta, Vaibhav Gupta

Potential Business Impact:

Helps AI models solve structured puzzles more accurately by learning the right solving order.

Business Areas:
A/B Testing, Data and Analytics

Post-training with reinforcement learning (RL) typically optimizes a single scalar objective and ignores structure in how solutions are produced. We ask whether a scalar hint toward a canonical solver ordering, used only during RL post-training, improves performance even when the model is fine-tuned on randomized solution sequences. On Sudoku, we train a Transformer with standard fine-tuning on randomized solving orders, then post-train it with Group Relative Policy Optimization (GRPO) using two rewards: cell accuracy and an ordering reward that increases when the model's emission order aligns with the solver order. To compare signals cleanly, we combine them via fixed mixtures and use a simple bootstrapped scaling to equalize component magnitudes at initialization. Mixed rewards generally outperform cell-only optimization: the best mixture yields substantially higher test accuracy than the fine-tuned-only model trained on random-order sequences and approaches the accuracy of the fine-tuned-only model trained on solver-order sequences. These results suggest that coarse ordering signals can steer RL post-training toward solver-order trajectories without modifying the supervised data or architecture.
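
A rough sketch of the reward construction described above, assuming a per-cell accuracy term, a position-wise order-agreement term, and a single scale factor estimated from rollouts of the initial policy; the function names, the alignment measure, and the exact mixing formula are illustrative assumptions, not the authors' code.

```python
import numpy as np

def cell_reward(pred_grid: np.ndarray, solution_grid: np.ndarray) -> float:
    """Fraction of Sudoku cells whose predicted value matches the solution."""
    return float(np.mean(pred_grid == solution_grid))

def order_reward(emission_order, solver_order) -> float:
    """Coarse ordering signal: fraction of emission steps that agree with the
    canonical solver order (one possible alignment measure, assumed here)."""
    return float(np.mean(np.array(emission_order) == np.array(solver_order)))

def bootstrap_scale(cell_rewards_init, order_rewards_init, eps: float = 1e-8) -> float:
    """Bootstrapped scaling: estimated once from initial-policy rollouts so the
    two reward components have comparable magnitude at initialization."""
    return float(np.mean(cell_rewards_init)) / (float(np.mean(order_rewards_init)) + eps)

def mixed_reward(r_cell: float, r_order: float, scale: float, mix: float) -> float:
    """Fixed mixture of the two components used during GRPO post-training."""
    return (1.0 - mix) * r_cell + mix * scale * r_order

def group_relative_advantages(rewards) -> np.ndarray:
    """GRPO-style advantages: normalize each rollout's scalar reward within its group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)
```

In this reading of the abstract, the bootstrapped scale is computed once at initialization, so the fixed mixture weight is the only quantity varied across reward configurations.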

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)