PACR: Progressively Ascending Confidence Reward for LLM Reasoning

Published: October 25, 2025 | arXiv ID: 2510.22255v1

By: Eunseop Yoon, Hee Suk Yoon, Jaehyun Jang, and more

Potential Business Impact:

Helps AI models learn to reason faster by rewarding good intermediate thinking steps.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reinforcement Learning with Verifiable Rewards (RLVR) has significantly improved LLM reasoning, but its sparse, outcome-based reward provides no guidance for intermediate steps, slowing exploration. We propose Progressively Ascending Confidence Reward (PACR), a dense, model-intrinsic reward computed directly from the model's evolving belief in the correct answer. PACR encodes the inductive bias that, along a well-formed reasoning trajectory, the probability of the ground-truth answer should have a generally ascending trend. We provide empirical and theoretical analysis validating that such an inductive bias constrains the exploration search space to regions richer in logically sound reasoning. We demonstrate that PACR accelerates exploration, reaches reward saturation with fewer trajectories, and yields improvements on multiple benchmarks. Our results suggest that dense, model-intrinsic shaping signals can make RLVR training more effective and reliable.
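To make the idea concrete, below is a minimal sketch of a PACR-style dense shaping reward. It assumes the per-step reward is the change in the model's probability of the ground-truth answer between consecutive reasoning steps, so a generally ascending confidence trajectory accumulates positive shaping reward; the paper's exact formulation, normalization, and combination with the outcome reward may differ. The function names and the shaping_weight parameter are illustrative, not taken from the paper.

```python
# Minimal sketch of a PACR-style dense shaping reward (illustrative, not the
# paper's exact equations).
from typing import List


def pacr_step_rewards(gt_answer_probs: List[float]) -> List[float]:
    """Dense per-step rewards from the model's evolving belief in the answer.

    gt_answer_probs[t] is the probability the policy assigns to the
    ground-truth answer after emitting reasoning step t.
    """
    rewards = []
    for prev, curr in zip(gt_answer_probs[:-1], gt_answer_probs[1:]):
        rewards.append(curr - prev)  # positive when confidence ascends
    return rewards


def shaped_return(gt_answer_probs: List[float], outcome_reward: float,
                  shaping_weight: float = 0.5) -> float:
    """Combine the sparse outcome reward with the dense shaping term."""
    dense = sum(pacr_step_rewards(gt_answer_probs))
    return outcome_reward + shaping_weight * dense


if __name__ == "__main__":
    # A trajectory whose confidence in the correct answer mostly rises.
    probs = [0.05, 0.12, 0.30, 0.28, 0.55, 0.80]
    print(pacr_step_rewards(probs))                    # per-step shaping rewards
    print(shaped_return(probs, outcome_reward=1.0))    # total shaped return
```

In this toy version, trajectories whose confidence trends upward receive extra reward even before the final answer is verified, which is the kind of dense, model-intrinsic guidance the abstract describes.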

Page Count
16 pages

Category
Computer Science:
Artificial Intelligence