DreamPRM-Code: Function-as-Step Process Reward Model with Label Correction for LLM Coding

Published: December 17, 2025 | arXiv ID: 2512.15000v1

By: Ruiyi Zhang, Peijia Qin, Qi Cao, and more

Potential Business Impact:

Helps LLMs write more reliable code by breaking generation into function-level steps that can be scored and corrected.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Process Reward Models (PRMs) have become essential for improving Large Language Models (LLMs) via test-time scaling, yet their effectiveness in coding remains limited by the lack of meaningful step decompositions in code and the noise of Monte-Carlo-generated partial labels. We propose DreamPRM-Code, a coding-focused PRM that treats functions as reasoning steps, using a Chain-of-Function prompting strategy to induce modular code generation and thereby enabling PRM training and application analogous to mathematical reasoning tasks. To address label noise, DreamPRM-Code introduces a meta-learning-based correction mechanism that leverages clean unit-test labels on final solutions and performs bi-level optimization to refine intermediate labels. Applied to test-time scaling, DreamPRM-Code achieves state-of-the-art performance on LiveCodeBench with an 80.9 pass@1 rate, surpassing OpenAI o4-mini.
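To make the function-as-step idea concrete, here is a minimal sketch of how a candidate solution might be decomposed into function-level steps and ranked for best-of-N test-time scaling. The `toy_prm_score` heuristic and the min-aggregation are illustrative assumptions, not the paper's trained PRM or its exact aggregation rule.

```python
# Sketch: treat each function in a candidate solution as a "step",
# score steps with a (toy) PRM, and pick the best of N candidates.
import ast

def function_steps(source: str) -> list[str]:
    """Split a candidate solution into function-level 'steps'."""
    tree = ast.parse(source)
    return [ast.get_source_segment(source, node)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef)]

def toy_prm_score(step_src: str) -> float:
    """Placeholder step score; a real PRM would be a learned model."""
    return 1.0 / (1.0 + len(step_src) / 200.0)

def solution_score(source: str) -> float:
    """Aggregate step scores (min is one common PRM aggregation)."""
    scores = [toy_prm_score(s) for s in function_steps(source)]
    return min(scores) if scores else 0.0

# Best-of-N test-time scaling: keep the candidate the PRM ranks highest.
candidates = [
    "def parse(x):\n    return int(x)\n\ndef solve(x):\n    return parse(x) * 2\n",
    "def solve(x):\n    return int(x) * 2\n",
]
best = max(candidates, key=solution_score)
print(best)
```

The label-correction mechanism can likewise be sketched as bi-level optimization: an inner step trains the PRM on corrected Monte-Carlo labels, and an outer step backpropagates a loss on clean final-solution unit-test labels through the inner update to refine the corrections. Everything below (the linear PRM, the additive `delta` correction, the synthetic data) is an assumed toy setup, not the paper's architecture or training recipe.

```python
# Sketch: bi-level label correction with PyTorch second-order autograd.
import torch

torch.manual_seed(0)

# Synthetic step features with noisy Monte-Carlo labels.
X_steps = torch.randn(64, 8)
y_noisy = torch.rand(64)
# Clean meta set: final solutions with unit-test outcomes (0/1).
X_meta = torch.randn(16, 8)
y_meta = torch.randint(0, 2, (16,)).float()

w = torch.zeros(8, requires_grad=True)       # toy linear PRM parameters
delta = torch.zeros(64, requires_grad=True)  # learnable label corrections

def prm(X, w):
    return torch.sigmoid(X @ w)

def bce(p, y):
    p = p.clamp(1e-6, 1 - 1e-6)
    return -(y * p.log() + (1 - y) * (1 - p).log()).mean()

opt_outer = torch.optim.Adam([delta], lr=1e-2)
lr_inner = 0.1

for step in range(200):
    # Inner step: train the PRM on corrected labels (keep graph for meta-grad).
    y_corr = (y_noisy + delta).clamp(0.0, 1.0)
    inner_loss = bce(prm(X_steps, w), y_corr)
    g, = torch.autograd.grad(inner_loss, w, create_graph=True)
    w_new = w - lr_inner * g

    # Outer step: evaluate the updated PRM on clean unit-test labels and
    # backprop through the inner update to refine the label corrections.
    outer_loss = bce(prm(X_meta, w_new), y_meta)
    opt_outer.zero_grad()
    outer_loss.backward()
    opt_outer.step()

    # Commit the inner update to the PRM parameters.
    with torch.no_grad():
        w.copy_(w_new.detach())
```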

Country of Origin
🇺🇸 United States

Page Count
6 pages

Category
Computer Science:
Machine Learning (CS)