P-Check: Advancing Personalized Reward Model via Learning to Generate Dynamic Checklist

Published: January 6, 2026 | arXiv ID: 2601.02986v1

By: Kwangwook Seo, Dongha Lee

Potential Business Impact:

Teaches AI models to judge content the way an individual user would.

Business Areas:
Personalization, Commerce and Shopping

Recent approaches in personalized reward modeling have primarily focused on leveraging user interaction history to align model judgments with individual preferences. However, these methods largely treat user context as a static or implicit conditioning signal, failing to capture the dynamic and multi-faceted nature of human judgment. In this paper, we propose P-Check, a novel personalized reward modeling framework designed to train a plug-and-play checklist generator that synthesizes dynamic evaluation criteria to guide reward prediction. To better align these checklists with personalized nuances, we introduce Preference-Contrastive Criterion Weighting, a training strategy that assigns saliency scores to criteria based on their discriminative power for personalized judgment. Extensive experiments demonstrate that P-Check not only improves reward accuracy but also enhances downstream personalized generation, and remains robust in out-of-distribution (OOD) scenarios.
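The core idea from the abstract can be sketched in a few lines: weight each checklist criterion by how well it discriminates a user's preferred response from a rejected one, then score new responses with the weighted checklist. The function names, the gap-based weighting formula, and the score ranges below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of preference-contrastive criterion weighting.
# Assumption: each checklist criterion yields a score in [0, 1] for a response;
# the paper's actual saliency computation may differ.

def saliency_weights(chosen_scores, rejected_scores):
    """Weight each criterion by its preference gap (chosen minus rejected)."""
    gaps = [max(c - r, 0.0) for c, r in zip(chosen_scores, rejected_scores)]
    total = sum(gaps) or 1.0  # guard against division by zero
    return [g / total for g in gaps]

def checklist_reward(weights, criterion_scores):
    """Reward is the saliency-weighted sum of per-criterion scores."""
    return sum(w * s for w, s in zip(weights, criterion_scores))

# Example: three criteria scored for a chosen vs. rejected response.
chosen = [0.9, 0.5, 0.8]
rejected = [0.2, 0.5, 0.6]
w = saliency_weights(chosen, rejected)  # criterion 2 gets weight 0 (no gap)
reward = checklist_reward(w, [1.0, 0.0, 1.0])
```

Criteria that do not separate preferred from rejected responses receive zero weight, so the reward is driven only by the criteria that actually reflect the user's preference.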

Country of Origin
🇰🇷 Korea, Republic of

Page Count
24 pages

Category
Computer Science:
Computation and Language