PACR: Progressively Ascending Confidence Reward for LLM Reasoning
By: Eunseop Yoon, Hee Suk Yoon, Jaehyun Jang, and more
Potential Business Impact:
Helps AI learn faster by rewarding good thinking steps.
Reinforcement Learning with Verifiable Rewards (RLVR) has significantly improved LLM reasoning, but its sparse, outcome-based reward provides no guidance for intermediate steps, slowing exploration. We propose Progressively Ascending Confidence Reward (PACR), a dense, model-intrinsic reward computed directly from the model's evolving belief in the correct answer. PACR encodes the inductive bias that, along a well-formed reasoning trajectory, the probability of the ground-truth answer should have a generally ascending trend. We provide empirical and theoretical analysis validating that such an inductive bias constrains the exploration search space to regions richer in logically sound reasoning. We demonstrate that PACR accelerates exploration, reaches reward saturation with fewer trajectories, and yields improvements on multiple benchmarks. Our results suggest that dense, model-intrinsic shaping signals can make RLVR training more effective and reliable.
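To make the idea concrete, below is a minimal sketch of how a PACR-style dense reward could be computed, assuming we can score the ground-truth answer's probability after each reasoning step. This is one plausible reading of the abstract, not the authors' exact formulation; the function name and the way drops in confidence are handled are illustrative assumptions.

```python
import torch

def pacr_step_rewards(answer_confidences: torch.Tensor) -> torch.Tensor:
    """Sketch of a PACR-style dense shaping reward (illustrative, not the paper's exact method).

    answer_confidences: shape (T+1,), where entry t is the model's probability of the
    ground-truth answer conditioned on the reasoning prefix up to step t, e.g.
    p(y* | x, z_1..z_t), obtained by scoring the answer tokens after each step
    (entry 0 is the confidence before any reasoning).

    Returns a per-step reward equal to the change in that confidence, so steps that
    raise belief in the correct answer are rewarded and steps that lower it are
    penalized, encoding the "generally ascending" inductive bias.
    """
    # Change in belief contributed by each new reasoning step.
    return answer_confidences[1:] - answer_confidences[:-1]

# Example with made-up confidences along a 5-step trajectory.
conf = torch.tensor([0.05, 0.12, 0.10, 0.35, 0.80, 0.92])
dense_rewards = pacr_step_rewards(conf)   # per-step shaping signal
total_shaping = dense_rewards.sum()       # could be added to the sparse outcome reward
print(dense_rewards, total_shaping)
```

In this simple form the per-step rewards telescope to the final minus the initial confidence; the paper's contribution lies in using such model-intrinsic, step-level signals to densify RLVR's otherwise sparse outcome reward.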
Similar Papers
Implicit Actor Critic Coupling via a Supervised Learning Framework for RLVR
Computation and Language
Helps computers solve math problems better.
Reinforcement Learning with Verifiable Rewards Implicitly Incentivizes Correct Reasoning in Base LLMs
Artificial Intelligence
Makes AI think more logically, not just guess.
ICPO: Intrinsic Confidence-Driven Group Relative Preference Optimization for Efficient Reinforcement Learning
Artificial Intelligence
Makes AI think better and avoid mistakes.