A Comprehensive Survey of Reward Models: Taxonomy, Applications, Challenges, and Future

Published: April 12, 2025 | arXiv ID: 2504.12328v1

By: Jialun Zhong, Wei Shen, Yanzeng Li, and more

Potential Business Impact:

Teaches AI systems to behave the way people prefer.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Reward models (RMs) have demonstrated impressive potential for enhancing large language models (LLMs): an RM can serve as a proxy for human preferences, providing signals that guide LLM behavior across a variety of tasks. In this paper, we provide a comprehensive overview of relevant research, exploring RMs from the perspectives of preference collection, reward modeling, and usage. We then introduce the applications of RMs and discuss benchmarks for their evaluation. Furthermore, we conduct an in-depth analysis of the challenges in the field and examine promising research directions. This paper aims to give beginners a comprehensive introduction to RMs and to facilitate future studies. The resources are publicly available on GitHub: https://github.com/JLZhong23/awesome-reward-models
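To make the "proxy for human preferences" idea concrete, reward models are commonly trained with a pairwise Bradley-Terry objective: given a scalar reward for a preferred (chosen) response and a dispreferred (rejected) one, the model maximizes the probability that the chosen response scores higher. Below is a minimal sketch of that objective; the function names and example scores are illustrative, not from the paper.

```python
import math

def preference_prob(r_chosen: float, r_rejected: float) -> float:
    """Bradley-Terry probability that the chosen response beats the
    rejected one: sigmoid of the reward difference."""
    return 1.0 / (1.0 + math.exp(-(r_chosen - r_rejected)))

def pairwise_rm_loss(score_pairs: list[tuple[float, float]]) -> float:
    """Mean negative log-likelihood over (chosen, rejected) score pairs,
    the standard training loss for preference-based reward models."""
    return -sum(
        math.log(preference_prob(rc, rr)) for rc, rr in score_pairs
    ) / len(score_pairs)

# Illustrative scores: the reward model rates each response with a scalar.
pairs = [(2.0, 0.5), (1.0, -1.0), (0.3, 0.2)]
loss = pairwise_rm_loss(pairs)
```

Equal scores give a preference probability of 0.5 (the model is indifferent); training pushes chosen scores above rejected ones, driving the loss toward zero.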

Country of Origin
🇨🇳 China

Page Count
38 pages

Category
Computer Science:
Computation and Language