GM-PRM: A Generative Multimodal Process Reward Model for Multimodal Mathematical Reasoning
By: Jianghangfan Zhang, Yibo Yan, Kening Zheng, and more
Potential Business Impact:
Checks each step of a multi-step math solution, explains its judgment, and corrects the first mistake it finds.
Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities but often struggle with complex, multi-step mathematical reasoning, where minor errors in visual perception or logical deduction can lead to complete failure. While Process Reward Models (PRMs) offer step-by-step supervision, existing multimodal PRMs are limited to being binary verifiers that can identify but not correct errors, offering little explanatory power. To address these deficiencies, we introduce the Generative Multimodal Process Reward Model (GM-PRM), a novel paradigm that transforms the PRM from a passive judge into an active reasoning collaborator. Instead of a simple scalar score, GM-PRM provides a fine-grained, interpretable analysis of each reasoning step, evaluating its step intent, visual alignment, and logical soundness. More critically, GM-PRM is trained to generate a corrected version of the first erroneous step it identifies. This unique corrective capability enables our new test-time inference strategy, Refined Best-of-N (Refined-BoN). This framework actively enhances solution quality by using the PRM's generated correction to guide the policy model toward a more promising reasoning trajectory, thereby improving the diversity and correctness of the solution pool. We demonstrate that GM-PRM achieves state-of-the-art results on multiple multimodal math benchmarks, significantly boosting policy model performance with remarkable data efficiency, requiring only a 20K-sample training dataset. Our code will be released upon acceptance.
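The Refined Best-of-N (Refined-BoN) strategy described in the abstract lends itself to a short sketch. The Python below is illustrative only: the function names (sample_solution, judge_solution, score_solution) and the StepJudgment container are hypothetical stand-ins for the policy-model and GM-PRM interfaces, which the abstract does not specify; it shows the general loop of sampling N candidates, letting the PRM correct the first erroneous step, and resampling from the corrected prefix before the final Best-of-N selection.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class StepJudgment:
    """Hypothetical container for GM-PRM's per-step analysis."""
    step_index: int
    is_correct: bool
    corrected_step: Optional[str] = None  # filled for the first erroneous step

def refined_best_of_n(
    question: str,
    image,  # visual input accompanying the math problem
    sample_solution: Callable[[str, object, List[str]], List[str]],        # policy model: returns reasoning steps, optionally continuing a given prefix
    judge_solution: Callable[[str, object, List[str]], List[StepJudgment]],  # GM-PRM: step-level verdicts plus a correction for the first error
    score_solution: Callable[[str, object, List[str]], float],             # scalar quality score used to rank candidates
    n: int = 8,
) -> List[str]:
    """Sketch of Refined-BoN: sample N solutions, let the PRM repair the first
    erroneous step of each flawed candidate, resample the continuation from
    the repaired prefix, then return the highest-scoring solution."""
    candidates: List[List[str]] = [sample_solution(question, image, []) for _ in range(n)]

    refined: List[List[str]] = []
    for steps in candidates:
        judgments = judge_solution(question, image, steps)
        first_error = next((j for j in judgments if not j.is_correct), None)
        if first_error is None or first_error.corrected_step is None:
            refined.append(steps)  # keep correct (or uncorrectable) chains as-is
            continue
        # Replace the faulty step with the PRM's correction and let the policy
        # model continue reasoning from this repaired prefix.
        prefix = steps[: first_error.step_index] + [first_error.corrected_step]
        refined.append(sample_solution(question, image, prefix))

    # Standard Best-of-N selection over the refined candidate pool.
    return max(refined, key=lambda s: score_solution(question, image, s))

The design choice this sketch tries to capture is that the PRM's correction is fed back to the policy model as a prefix, so the refined pool can contain trajectories the original N samples would never have reached, which is how the paper argues the method improves both the diversity and the correctness of the solution pool.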
Similar Papers
VisualPRM: An Effective Process Reward Model for Multimodal Reasoning
Computer Vision and Pattern Recognition
Scores each reasoning step to make multimodal AI better at solving problems involving pictures and text.
GenPRM: Scaling Test-Time Compute of Process Reward Models via Generative Reasoning
Computation and Language
Lets reward models reason step by step, using extra test-time compute to check math solutions.