Score: 2

PRISM: A Unified Framework for Post-Training LLMs Without Verifiable Rewards

Published: January 8, 2026 | arXiv ID: 2601.04700v1

By: Mukesh Ghimire, Aosong Feng, Liwen You, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Lets language models keep improving at reasoning and coding tasks without human-provided answers, reducing the cost of supervision.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Current techniques for post-training Large Language Models (LLMs) rely either on costly human supervision or on external verifiers to boost performance on tasks such as mathematical reasoning and code generation. However, as LLMs improve at problem-solving, further gains will increasingly require high-quality solutions to difficult problems that humans cannot readily supply. As a result, learning from unlabeled data is becoming increasingly attractive to the research community. Existing methods extract a learning signal from a model's consistency, either by majority voting or by converting the model's internal confidence into a reward. Although internal consistency metrics such as entropy or self-certainty require no human intervention, as we show in this work, they are unreliable signals for large-scale and long-term training. To address this unreliability, we propose PRISM, a unified training framework that uses a Process Reward Model (PRM) to guide learning alongside the model's internal confidence in the absence of ground-truth labels. We show that effectively combining the PRM with self-certainty leads to both stable training and better test-time performance, while also keeping the model's internal confidence in check.
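To make the core idea concrete, below is a minimal sketch of how a PRM signal might be blended with a self-certainty signal to score a sampled solution without any ground-truth label. The function names (`self_certainty`, `combined_reward`), the mixing weight `alpha`, and the specific confidence formula are illustrative assumptions, not the paper's actual formulation.

```python
import math

def self_certainty(token_logprobs):
    """Map the average token log-probability of a sampled solution to (0, 1].

    Hypothetical confidence proxy (geometric-mean token probability);
    the paper's self-certainty definition may differ.
    """
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def combined_reward(prm_step_scores, token_logprobs, alpha=0.5):
    """Blend a Process Reward Model signal with model self-certainty.

    prm_step_scores: per-reasoning-step scores in [0, 1] from an external PRM.
    alpha: mixing weight between PRM score and confidence (assumed hyperparameter).
    """
    prm_score = sum(prm_step_scores) / len(prm_step_scores)  # aggregate over steps
    confidence = self_certainty(token_logprobs)
    return alpha * prm_score + (1.0 - alpha) * confidence

# Example: score one sampled solution without a ground-truth label.
reward = combined_reward(
    prm_step_scores=[0.9, 0.7, 0.8],           # PRM judgments for each reasoning step
    token_logprobs=[-0.2, -0.1, -0.3, -0.05],  # log-probs of the sampled tokens
)
print(f"unverified reward: {reward:.3f}")
```

In a sketch like this, the PRM term anchors the reward so that the policy cannot simply inflate its own confidence, which is the failure mode the abstract attributes to confidence-only signals.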

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Computation and Language