From Solving to Verifying: A Unified Objective for Robust Reasoning in LLMs

Published: November 19, 2025 | arXiv ID: 2511.15137v1

By: Xiaoxuan Wang, Bo Liu, Song Jiang, and more

Potential Business Impact:

Helps AI models check their own reasoning more reliably.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The reasoning capabilities of large language models (LLMs) have been significantly improved through reinforcement learning (RL). Nevertheless, LLMs still struggle to consistently verify their own reasoning traces. This raises the research question of how to enhance the self-verification ability of LLMs, and whether such an ability can in turn improve reasoning performance. In this work, we propose GRPO-Verif, an algorithm that jointly optimizes solution generation and self-verification within a unified loss function, with an adjustable hyperparameter controlling the weight of the verification signal. Experimental results demonstrate that our method enhances self-verification capability while maintaining comparable reasoning performance.
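The unified objective described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the function names (`group_relative_advantages`, `unified_loss`) and the exact form of the combination (`gen_loss + lam * verif_loss`) are assumptions based on the abstract's description of a GRPO-style method with a hyperparameter weighting the verification signal.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """GRPO-style advantages (assumed form): normalize each sampled
    solution's reward against the mean/std of its group, i.e. several
    rollouts for the same prompt."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against zero variance
    return [(r - mu) / sigma for r in rewards]

def unified_loss(gen_loss, verif_loss, lam=0.5):
    """Hypothetical unified objective: the generation loss plus a
    lam-weighted self-verification loss. Setting lam = 0 recovers the
    plain generation objective."""
    return gen_loss + lam * verif_loss
```

For example, rewards `[1, 0, 1, 0]` within one group normalize to advantages `[1, -1, 1, -1]`, and the single hyperparameter `lam` lets the verification signal be dialed up or down without a separate training stage.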

Country of Origin
🇺🇸 United States

Page Count
7 pages

Category
Computer Science:
Machine Learning (CS)