Hybrid Reinforcement: When Reward Is Sparse, It's Better to Be Dense
By: Leitian Tao, Ilia Kulikov, Swarnadeep Saha, and more
Potential Business Impact:
Teaches computers to solve harder math problems.
Post-training for reasoning of large language models (LLMs) increasingly relies on verifiable rewards: deterministic checkers that provide 0-1 correctness signals. While reliable, such binary feedback is brittle: many tasks admit partially correct or alternative answers that verifiers under-credit, and the resulting all-or-nothing supervision limits learning. Reward models offer richer, continuous feedback, which can serve as a complementary supervisory signal to verifiers. We introduce HERO (Hybrid Ensemble Reward Optimization), a reinforcement learning framework that integrates verifier signals with reward-model scores in a structured way. HERO employs stratified normalization to bound reward-model scores within verifier-defined groups, preserving correctness while refining quality distinctions, and variance-aware weighting to emphasize challenging prompts where dense signals matter most. Across diverse mathematical reasoning benchmarks, HERO consistently outperforms RM-only and verifier-only baselines, with strong gains on both verifiable and hard-to-verify tasks. Our results show that hybrid reward design retains the stability of verifiers while leveraging the nuance of reward models to advance reasoning.
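To make the two ingredients concrete, below is a minimal sketch of how stratified normalization and variance-aware weighting could be combined for a group of rollouts on one prompt. It is an illustration under stated assumptions, not the authors' released implementation: the function names, the normalization bands for correct vs. incorrect responses, and the specific weighting rule are all hypothetical choices consistent with the abstract's description.

```python
# Hypothetical sketch of a HERO-style hybrid reward; band edges and the
# weighting schedule are assumptions, not the paper's exact recipe.
import numpy as np

def stratified_normalize(rm_scores, verifier_labels,
                         correct_range=(0.9, 1.0), incorrect_range=(0.0, 0.4)):
    """Map reward-model scores into disjoint bands defined by the verifier.

    Responses the verifier marks correct are min-max normalized into
    correct_range; incorrect ones into incorrect_range. The verifier's
    correctness ordering is preserved, while the reward model refines
    quality distinctions within each stratum.
    """
    rm_scores = np.asarray(rm_scores, dtype=float)
    verifier_labels = np.asarray(verifier_labels, dtype=bool)
    rewards = np.empty_like(rm_scores)
    for mask, (lo, hi) in ((verifier_labels, correct_range),
                           (~verifier_labels, incorrect_range)):
        if not mask.any():
            continue
        group = rm_scores[mask]
        span = group.max() - group.min()
        if span < 1e-8:  # all scores in the stratum equal: use the band midpoint
            rewards[mask] = (lo + hi) / 2.0
        else:
            rewards[mask] = lo + (group - group.min()) / span * (hi - lo)
    return rewards

def variance_aware_weight(verifier_labels, alpha=1.0):
    """One plausible variance-aware weight for a prompt's rollout group.

    When the binary verdicts have low variance (all correct or all incorrect),
    the 0-1 reward provides little learning signal, so this sketch upweights
    the dense hybrid reward there; alpha controls the emphasis.
    """
    p = float(np.mean(verifier_labels))
    verdict_variance = 4.0 * p * (1.0 - p)          # normalized to [0, 1]
    return 1.0 + alpha * (1.0 - verdict_variance)   # in [1, 1 + alpha]

# Example: four rollouts for a single prompt
rm = [0.2, 0.8, 0.5, 0.9]          # reward-model scores
ok = [False, True, False, True]    # verifier 0/1 verdicts
print(stratified_normalize(rm, ok))
print(variance_aware_weight(ok))
```

The prompt-level weight and the stratified rewards would then feed a standard policy-gradient update; how exactly they are combined there is left open in this sketch.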
Similar Papers
The Good, The Bad, and The Hybrid: A Reward Structure Showdown in Reasoning Models Training
Machine Learning (CS)
Improves math AI by rewarding good reasoning.
Beyond Monolithic Rewards: A Hybrid and Multi-Aspect Reward Optimization for MLLM Alignment
Artificial Intelligence
Teaches AI to follow instructions better.
Reward Hacking Mitigation using Verifiable Composite Rewards
Machine Learning (CS)
Teaches AI to answer health questions correctly.