An Efficient and Precise Training Data Construction Framework for Process-supervised Reward Model in Mathematical Reasoning
By: Wei Sun, Qianlong Du, Fuwei Cui, and more
Potential Business Impact:
Teaches computers to solve math problems better.
Enhancing the mathematical reasoning capabilities of Large Language Models (LLMs) is of great scientific and practical significance. Researchers typically employ process-supervised reward models (PRMs) to guide the reasoning process, effectively improving the models' reasoning abilities. However, existing methods for constructing process-supervision training data, such as manual annotation and per-step Monte Carlo estimation, are either costly or of poor quality. To address these challenges, this paper introduces EpicPRM, a framework that annotates each intermediate reasoning step according to its quantified contribution and uses an adaptive binary search algorithm to improve both annotation precision and efficiency. With this approach, we efficiently construct a high-quality process-supervision training dataset named Epic50k, consisting of 50k annotated intermediate steps. A PRM trained on Epic50k significantly outperforms PRMs trained on other publicly available datasets. Epic50k is available at https://github.com/xiaolizh1/EpicPRM.
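The abstract does not detail the adaptive binary search, but the general idea of binary-searching for the first erroneous step in a solution chain can be sketched as below. This is a minimal illustration, not EpicPRM's actual implementation: it assumes step correctness is judged by a Monte Carlo completion estimate and that correctness is monotone (once a prefix can no longer reach the right answer, no longer prefix can). The names `locate_first_error`, `success_rate`, and `rollout_is_correct` are hypothetical.

```python
from typing import Callable, List

def locate_first_error(
    steps: List[str],
    success_rate: Callable[[List[str]], float],
    threshold: float = 0.0,
) -> int:
    """Binary-search for the earliest step after which no completion
    reaches the correct final answer.

    Assumes `success_rate(prefix)` estimates (e.g. via Monte Carlo
    rollouts from an LLM) the probability that completions of `prefix`
    end in the correct answer, and that the full chain is known to be
    wrong, so an erroneous step exists.
    """
    lo, hi = 0, len(steps) - 1  # invariant: first error lies in steps[lo..hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if success_rate(steps[: mid + 1]) > threshold:
            lo = mid + 1  # prefix still recoverable: error is later
        else:
            hi = mid      # prefix already doomed: error at mid or earlier
    return lo  # index of the first erroneous step

# Hypothetical usage with an 8-rollout Monte Carlo estimator:
# rate = lambda prefix: sum(rollout_is_correct(prefix) for _ in range(8)) / 8
# first_bad = locate_first_error(solution_steps, rate)
```

The appeal of this scheme is cost: instead of running Monte Carlo estimation at every one of the n steps, binary search needs only O(log n) estimates per solution, which is the efficiency gain the paper's title alludes to.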
Similar Papers
Uncertainty-Based Methods for Automated Process Reward Data Construction and Output Aggregation in Mathematical Reasoning
Artificial Intelligence
Teaches computers to solve math problems better.
Efficient Process Reward Model Training via Active Learning
Machine Learning (CS)
Teaches computers to learn faster with less work.
VRPRM: Process Reward Modeling via Visual Reasoning
Machine Learning (CS)
Teaches computers to think better with less data.