P-Check: Advancing Personalized Reward Model via Learning to Generate Dynamic Checklist
By: Kwangwook Seo, Dongha Lee
Potential Business Impact:
Teaches computers to judge things the way you would.
Recent work on personalized reward modeling has primarily focused on leveraging user interaction history to align model judgments with individual preferences. However, existing approaches largely treat user context as a static or implicit conditioning signal, failing to capture the dynamic and multi-faceted nature of human judgment. In this paper, we propose P-Check, a novel personalized reward modeling framework that trains a plug-and-play checklist generator to synthesize dynamic evaluation criteria for guiding reward prediction. To better align these checklists with personalized nuances, we introduce Preference-Contrastive Criterion Weighting, a training strategy that assigns saliency scores to criteria based on their discriminative power for personalized judgment. Extensive experiments demonstrate that P-Check not only improves reward accuracy but also enhances downstream personalized generation, and remains robust in out-of-distribution (OOD) scenarios.
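To make the pipeline concrete, below is a minimal sketch of how a checklist generator might guide reward prediction and how a preference-contrastive weighting could be computed. Everything here is an illustrative assumption: the function names, the keyword-overlap scorer, and the softmax-over-score-gaps weighting are stand-ins, not the paper's reported implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str            # natural-language evaluation criterion
    weight: float = 1.0  # saliency weight (set by contrastive weighting)

def generate_checklist(user_history: list[str], query: str) -> list[Criterion]:
    """Stand-in for the plug-and-play checklist generator.
    In the actual framework this would be a trained model that synthesizes
    user- and query-specific criteria; here we return fixed examples."""
    return [
        Criterion("Uses a concise, bulleted structure"),
        Criterion("Matches the user's preferred formal tone"),
        Criterion("Includes a concrete code example"),
    ]

def score_criterion(criterion: Criterion, response: str) -> float:
    """Stand-in per-criterion scorer (a real system would use a learned judge).
    Returns a score in [0, 1] via a crude keyword-overlap heuristic."""
    keywords = {w.lower().strip(",.") for w in criterion.text.split()}
    words = {w.lower().strip(",.") for w in response.split()}
    return len(keywords & words) / max(len(keywords), 1)

def contrastive_weights(criteria, chosen: str, rejected: str, temperature=0.5):
    """One plausible reading of preference-contrastive criterion weighting:
    criteria whose scores best separate the chosen from the rejected response
    receive higher saliency, normalized with a softmax."""
    gaps = [score_criterion(c, chosen) - score_criterion(c, rejected) for c in criteria]
    exps = [math.exp(g / temperature) for g in gaps]
    total = sum(exps)
    for c, e in zip(criteria, exps):
        c.weight = e / total
    return criteria

def reward(criteria, response: str) -> float:
    """Checklist-guided reward: saliency-weighted sum of per-criterion scores."""
    return sum(c.weight * score_criterion(c, response) for c in criteria)

if __name__ == "__main__":
    history = ["Prefers bulleted answers", "Likes code examples"]
    query = "Explain how to cache API results."
    chosen = "Here is a concise bulleted plan with a concrete code example."
    rejected = "Caching is storing things so you do not recompute them."

    criteria = generate_checklist(history, query)
    criteria = contrastive_weights(criteria, chosen, rejected)
    print(f"reward(chosen)   = {reward(criteria, chosen):.3f}")
    print(f"reward(rejected) = {reward(criteria, rejected):.3f}")
```

The key design idea this sketch tries to capture is that the checklist is generated per user and per query rather than fixed in advance, and that criteria are not weighted uniformly: those that actually discriminate between preferred and dispreferred responses dominate the final reward.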
Similar Papers
Checklists Are Better Than Reward Models For Aligning Language Models
Computation and Language
Teaches computers to follow all kinds of instructions.
The Reward Model Selection Crisis in Personalized Alignment
Artificial Intelligence
Helps AI learn what you really want.
Towards Faithful and Controllable Personalization via Critique-Post-Edit Reinforcement Learning
Computation and Language
Teaches AI to write exactly how you like it.