Better Language Model-Based Judging Reward Modeling through Scaling Comprehension Boundaries

Published: August 25, 2025 | arXiv ID: 2508.18212v1

By: Meiling Ning, Zhongbao Zhang, Junda Ye and more

Potential Business Impact:

Makes AI better at judging answers by understanding context.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The emergence of LM-based judging reward modeling, represented by generative reward models, has made reinforcement learning from AI feedback (RLAIF) efficient and scalable. To further advance this paradigm, we propose a core insight: this form of reward modeling shares fundamental formal consistency with natural language inference (NLI), a core task in natural language understanding. This reframed perspective points to a key path toward building superior reward models: scaling the model's comprehension boundaries. Pursuing this path, exploratory experiments on NLI tasks demonstrate that slot-prediction masked language models (MLMs) incorporating contextual explanations achieve significantly better performance than mainstream autoregressive models. Based on this key finding, we propose ESFP-RM, a two-stage LM-based judging reward model that uses an explanation-based slot framework for prediction to fully leverage the advantages of MLMs. Extensive experiments demonstrate that in both reinforcement learning from human feedback (RLHF) and out-of-distribution (OOD) scenarios, ESFP-RM delivers more stable and generalizable reward signals than generative reward models.
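
The slot-prediction idea summarized above can be illustrated with a short sketch: an MLM judges a (question, answer, explanation) triple by predicting a verbalizer word in a masked slot, and the gap between the probabilities of a positive and a negative verbalizer token is read off as a scalar reward. This is a minimal, assumed illustration of the general technique, not the authors' ESFP-RM implementation; the backbone (roberta-large), the prompt template, and the verbalizer words "good"/"bad" are hypothetical choices.

```python
# Minimal sketch of MLM slot-prediction judging (illustrative, not ESFP-RM).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-large"  # assumed backbone; the paper's choice may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

def slot_reward(question: str, answer: str, explanation: str) -> float:
    """Return P(good) - P(bad) at the masked slot as a scalar reward."""
    # Hypothetical template: the explanation gives the MLM extra context
    # before it fills the judgment slot.
    template = (
        f"Question: {question} Answer: {answer} "
        f"Explanation: {explanation} "
        f"Overall, the answer is {tokenizer.mask_token}."
    )
    inputs = tokenizer(template, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits  # (1, seq_len, vocab_size)

    # Locate the masked slot and compare two single-token verbalizer words.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
    good_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" good")[0])
    bad_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" bad")[0])
    probs = logits[0, mask_pos].softmax(dim=-1)
    return (probs[good_id] - probs[bad_id]).item()

print(slot_reward("What is 2+2?", "4", "The arithmetic is correct."))
```

In the two-stage setup the abstract describes, the contextual explanation would come from a separate explanation-generation stage; here it is simply passed in as a string.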

Country of Origin
🇨🇳 China

Page Count
15 pages

Category
Computer Science:
Computation and Language