Online Rubrics Elicitation from Pairwise Comparisons
By: MohammadHossein Rezaei, Robert Vacareanu, Zihao Wang, and more
Potential Business Impact:
Teaches computers to write better answers by updating the grading rules as they learn.
Rubrics provide a flexible way to train LLMs on open-ended long-form answers where verifiable rewards are not applicable and human preferences provide only coarse signals. Prior work shows that reinforcement learning with rubric-based rewards leads to consistent gains in LLM post-training. Most existing approaches rely on rubrics that remain static over the course of training. Such static rubrics, however, are vulnerable to reward-hacking behaviors and fail to capture emergent desiderata that arise during training. We introduce Online Rubrics Elicitation (OnlineRubrics), a method that dynamically curates evaluation criteria in an online manner through pairwise comparisons of responses from the current and reference policies. This online process enables continuous identification and mitigation of errors as training proceeds. Empirically, this approach yields consistent improvements of up to 8% over training exclusively with static rubrics across AlpacaEval, GPQA, and ArenaHard, as well as the validation sets of expert questions and rubrics. We qualitatively analyze the elicited criteria and identify prominent themes such as transparency, practicality, organization, and reasoning.
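To make the training loop concrete, here is a minimal Python sketch of the online elicitation step the abstract describes: responses from the current and reference policies are compared pairwise, a judge proposes new criteria that distinguish them, and the reward is computed against the static rubric plus everything elicited so far. All names here (`current_policy`, `reference_policy`, `llm_judge`, `score`) are hypothetical stand-ins for LLM calls, not the authors' implementation.

```python
# A minimal sketch of the online rubric-elicitation loop, assuming the
# policies, judge, and grader are LLM calls; they are stubbed here so the
# example runs on its own.

from dataclasses import dataclass, field


@dataclass
class RubricState:
    static_criteria: list[str]  # fixed rubric items from the training data
    elicited_criteria: list[str] = field(default_factory=list)  # grown online


def current_policy(prompt: str) -> str:
    return "response from the policy being trained"  # stub for an LLM call


def reference_policy(prompt: str) -> str:
    return "response from the frozen reference policy"  # stub for an LLM call


def llm_judge(prompt: str, a: str, b: str) -> list[str]:
    # Stub: in practice, a judge LLM compares the two responses and proposes
    # criteria that distinguish them (e.g., "states assumptions explicitly").
    return ["hypothetical new criterion"]


def score(response: str, criteria: list[str]) -> float:
    # Stub: in practice, a grader LLM checks each criterion and returns
    # the fraction satisfied, which serves as the reward signal.
    return 0.0


def rubric_reward(prompt: str, state: RubricState) -> float:
    """One online step: compare policies pairwise, elicit criteria, score."""
    cur = current_policy(prompt)
    ref = reference_policy(prompt)
    # Elicit criteria that explain the gap between current and reference
    # responses; updating the rubric online is what lets it track emergent
    # behaviors, including reward hacking, as training proceeds.
    state.elicited_criteria.extend(llm_judge(prompt, cur, ref))
    # Reward the current response against the combined rubric.
    return score(cur, state.static_criteria + state.elicited_criteria)


state = RubricState(static_criteria=["answers the question directly"])
reward = rubric_reward("Explain overfitting to a new engineer.", state)
```

Under this reading, the key design choice is that the rubric set only grows: each pairwise comparison can add criteria, so later training steps are graded against a strictly richer rubric than earlier ones.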
Similar Papers
Auto-Rubric: Learning to Extract Generalizable Criteria for Reward Modeling
Machine Learning (CS)
Teaches AI to follow rules using fewer examples.
Rubrics as Rewards: Reinforcement Learning Beyond Verifiable Domains
Machine Learning (CS)
Teaches computers to follow rules better.
RubricRL: Simple Generalizable Rewards for Text-to-Image Generation
CV and Pattern Recognition
Makes AI art follow your exact instructions better.