A Systematic Analysis of Base Model Choice for Reward Modeling
By: Kian Ahrabian, Pegah Jandaghi, Negar Mokhberian, and more
Potential Business Impact:
Improves trained AI quality by picking the best starting model.
Reinforcement learning from human feedback (RLHF) and, at its core, reward modeling have become a crucial part of training powerful large language models (LLMs). One commonly overlooked factor in training high-quality reward models (RMs) is the effect of the base model, which is becoming more challenging to choose given the rapidly growing pool of LLMs. In this work, we present a systematic analysis of the effect of base model selection on reward modeling performance. Our results show that performance can be improved by up to 14% compared to the most common (i.e., default) choice. Moreover, we showcase the strong statistical relationship between some existing benchmarks and downstream performance. We also demonstrate that results from a small set of benchmarks can be combined to boost model selection (+18% on average in the top 5-10). Lastly, we illustrate the impact of different post-training steps on the final performance and explore using estimated data distributions to reduce performance prediction error.
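To make the benchmark-combination idea concrete, here is a minimal sketch of how one might aggregate a small set of benchmark scores into a single signal for ranking candidate base models, and then check how well that signal tracks downstream reward-model performance. The model names, benchmark names, scores, and the z-score averaging scheme below are illustrative assumptions, not the paper's actual data or method.

```python
# Hypothetical sketch: rank candidate base models for reward modeling by
# combining a few benchmark scores, then measure how strongly that combined
# signal correlates with downstream RM accuracy. All values are placeholders.
import numpy as np
from scipy.stats import spearmanr

# Per-model scores on a small set of existing benchmarks (rows = models).
benchmarks = ["MMLU", "GSM8K", "IFEval"]            # assumed benchmark set
models = ["base-A", "base-B", "base-C", "base-D"]   # hypothetical candidates
scores = np.array([
    [0.62, 0.41, 0.55],
    [0.70, 0.48, 0.60],
    [0.58, 0.52, 0.49],
    [0.66, 0.45, 0.63],
])

# Downstream reward-model accuracy after training on each base (placeholders).
rm_accuracy = np.array([0.71, 0.78, 0.69, 0.75])

# Combine benchmarks by z-normalizing each column and averaging; one simple
# way to collapse a small benchmark set into a single selection signal.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
combined = z.mean(axis=1)

# Rank candidates by the combined signal (higher is better).
ranking = [models[i] for i in np.argsort(-combined)]
print("Predicted ranking:", ranking)

# Spearman correlation quantifies how well the combined benchmark signal
# tracks downstream RM performance across the candidate pool.
rho, pval = spearmanr(combined, rm_accuracy)
print(f"Spearman rho={rho:.2f} (p={pval:.2f})")
```

In practice, the choice of benchmarks and the aggregation rule would need to be validated against measured downstream RM performance, which is the kind of statistical relationship the paper analyzes.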
Similar Papers
BaseReward: A Strong Baseline for Multimodal Reward Model
CV and Pattern Recognition
Teaches AI to understand and judge images and text.
Robust Reinforcement Learning from Human Feedback for Large Language Models Fine-Tuning
Machine Learning (Stat)
Makes AI understand what people want better.
LoRe: Personalizing LLMs via Low-Rank Reward Modeling
Machine Learning (CS)
Teaches AI to learn what you like.