From Solving to Verifying: A Unified Objective for Robust Reasoning in LLMs
By: Xiaoxuan Wang, Bo Liu, Song Jiang, and more
Potential Business Impact:
Helps AI check its own thinking better.
The reasoning capabilities of large language models (LLMs) have been significantly improved through reinforcement learning (RL). Nevertheless, LLMs still struggle to consistently verify their own reasoning traces. This raises two research questions: how can the self-verification ability of LLMs be enhanced, and can such an ability further improve reasoning performance? In this work, we propose GRPO-Verif, an algorithm that jointly optimizes solution generation and self-verification within a unified loss function, with an adjustable hyperparameter controlling the weight of the verification signal. Experimental results demonstrate that our method enhances self-verification capability while maintaining comparable reasoning performance.
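The abstract describes the objective only at a high level, so below is a minimal PyTorch sketch of what a unified solve-plus-verify loss with a GRPO-style group-normalized advantage could look like. Everything here, including the function names, tensor shapes, reward definitions, and the default weight `lam`, is an assumption for illustration, not the paper's actual implementation.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """GRPO-style advantages: normalize each rollout's reward against
    the mean/std of its group (one group = all rollouts for a prompt)."""
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)

def unified_loss(solve_logprobs: torch.Tensor,
                 solve_rewards: torch.Tensor,
                 verify_logprobs: torch.Tensor,
                 verify_rewards: torch.Tensor,
                 lam: float = 0.5) -> torch.Tensor:
    """Hypothetical single objective: policy-gradient loss on solution
    rollouts plus a lam-weighted policy-gradient loss on
    self-verification rollouts.

    All tensors have shape (num_prompts, rollouts_per_prompt); logprobs
    are the summed token log-probabilities of each rollout.
    """
    solve_adv = group_relative_advantages(solve_rewards)
    verify_adv = group_relative_advantages(verify_rewards)
    loss_solve = -(solve_adv.detach() * solve_logprobs).mean()
    loss_verify = -(verify_adv.detach() * verify_logprobs).mean()
    return loss_solve + lam * loss_verify

# Toy usage: 2 prompts, 4 rollouts each, with dummy rewards/log-probs.
logp_s = torch.randn(2, 4, requires_grad=True)
logp_v = torch.randn(2, 4, requires_grad=True)
r_s = torch.randint(0, 2, (2, 4)).float()   # e.g. solution correctness
r_v = torch.randint(0, 2, (2, 4)).float()   # e.g. verification correctness
loss = unified_loss(logp_s, r_s, logp_v, r_v, lam=0.5)
loss.backward()
```

The one design point mirrored from the abstract is that both signals flow through a single loss and a single backward pass, with `lam` controlling how strongly the verification reward shapes the policy.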
Similar Papers
DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning
Artificial Intelligence
Teaches computers to prove math problems step-by-step.
Veri-R1: Toward Precise and Faithful Claim Verification via Online Reinforcement Learning
Computation and Language
Helps computers check if online stories are true.
When Does Verification Pay Off? A Closer Look at LLMs as Solution Verifiers
Computation and Language
Helps AI learn to check its own answers better.