A Systematic Analysis of Base Model Choice for Reward Modeling

Published: May 16, 2025 | arXiv ID: 2505.10775v1

By: Kian Ahrabian, Pegah Jandaghi, Negar Mokhberian, and more

Potential Business Impact:

Improves LLM output quality by selecting a better base model for the reward model used in RLHF training.

Business Areas:
A/B Testing, Data and Analytics

Reinforcement learning from human feedback (RLHF) and, at its core, reward modeling have become a crucial part of training powerful large language models (LLMs). One commonly overlooked factor in training high-quality reward models (RMs) is the effect of the base model, which is becoming more challenging to choose given the rapidly growing pool of LLMs. In this work, we present a systematic analysis of the effect of base model selection on reward modeling performance. Our results show that performance can be improved by up to 14% compared to the most common (i.e., default) choice. Moreover, we showcase the strong statistical relationship between some existing benchmarks and downstream performance. We also demonstrate that results from a small set of benchmarks could be combined to boost model selection (+18% on average in the top 5-10). Lastly, we illustrate the impact of different post-training steps on the final performance and explore using estimated data distributions to reduce performance prediction error.
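To make the benchmark-combination idea in the abstract concrete, here is a minimal sketch of how one might rank candidate base models by aggregating a few benchmark scores and then check how well that ranking tracks downstream reward-model performance. All model names, scores, and the z-score averaging rule are illustrative assumptions, not the paper's actual method or data.

```python
# Hedged sketch: combine a small set of benchmark scores per candidate base
# model, rank the models, and measure rank correlation with downstream RM
# performance. All numbers and names below are hypothetical placeholders.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical benchmark scores (rows: candidate base models, cols: benchmarks).
models = ["model-A", "model-B", "model-C", "model-D", "model-E"]
benchmark_scores = np.array([
    [62.1, 48.3, 71.0],   # model-A
    [58.4, 52.9, 69.5],   # model-B
    [66.7, 55.1, 74.2],   # model-C
    [60.0, 47.8, 70.1],   # model-D
    [64.3, 53.6, 72.8],   # model-E
])

# Hypothetical downstream reward-model accuracy after training on each base model.
downstream_rm_perf = np.array([71.2, 69.8, 75.4, 70.5, 74.1])

# Combine benchmarks: z-score each benchmark column, then average across columns.
z = (benchmark_scores - benchmark_scores.mean(axis=0)) / benchmark_scores.std(axis=0)
combined = z.mean(axis=1)

# Rank candidate base models by the combined score (higher is better).
ranking = sorted(zip(models, combined), key=lambda x: -x[1])
print("Combined-benchmark ranking:", [name for name, _ in ranking])

# How strongly does the combined score track downstream RM performance?
rho, pval = spearmanr(combined, downstream_rm_perf)
print(f"Spearman rho = {rho:.2f} (p = {pval:.3f})")
```

A high rank correlation in a setup like this would suggest that cheap, existing benchmark scores can stand in for expensive downstream reward-model evaluations when shortlisting base models.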

Country of Origin
🇺🇸 United States

Page Count
19 pages

Category
Computer Science:
Computation and Language