RoleRMBench & RoleRM: Towards Reward Modeling for Profile-Based Role Play in Dialogue Systems
By: Hang Ding, Qiming Feng, Dongqi Liu, and more
Potential Business Impact:
Makes AI better at pretending to be characters.
Reward modeling has become a cornerstone of aligning large language models (LLMs) with human preferences. Yet, when extended to subjective and open-ended domains such as role play, existing reward models exhibit severe degradation, struggling to capture nuanced and persona-grounded human judgments. To address this gap, we introduce RoleRMBench, the first systematic benchmark for reward modeling in role-playing dialogue, covering seven fine-grained capabilities from narrative management to role consistency and engagement. Evaluation on RoleRMBench reveals large and consistent gaps between general-purpose reward models and human judgment, particularly in narrative and stylistic dimensions. We further propose RoleRM, a reward model trained with Continuous Implicit Preferences (CIP), which reformulates subjective evaluation as continuous consistent pairwise supervision under multiple structuring strategies. Comprehensive experiments show that RoleRM surpasses strong open- and closed-source reward models by over 24% on average, demonstrating substantial gains in narrative coherence and stylistic fidelity. Our findings highlight the importance of continuous preference representation and annotation consistency, establishing a foundation for subjective alignment in human-centered dialogue systems.
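To make the "pairwise supervision" framing concrete, the sketch below shows a generic Bradley-Terry style pairwise reward-model training step, the standard setup that preference-based reward models build on. This is an illustrative assumption, not the paper's CIP formulation: the toy model, the random placeholder encodings, and all names here are hypothetical.

```python
# Minimal sketch of pairwise reward-model training with a Bradley-Terry style
# loss, as commonly used for preference-based reward modeling. This is NOT the
# paper's CIP method; the model and toy data below are illustrative assumptions.
import torch
import torch.nn as nn

class ToyRewardModel(nn.Module):
    """Scores a pooled (dialogue, response) encoding with a single scalar reward."""
    def __init__(self, hidden_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, encoded: torch.Tensor) -> torch.Tensor:
        return self.scorer(encoded).squeeze(-1)  # shape: (batch,)

def pairwise_preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry negative log-likelihood: push the chosen reward above the rejected one."""
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyRewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Placeholder encodings standing in for pooled LLM features of the
    # preferred vs. dispreferred role-play responses in a preference pair.
    chosen = torch.randn(8, 64)
    rejected = torch.randn(8, 64)

    loss = pairwise_preference_loss(model(chosen), model(rejected))
    loss.backward()
    optimizer.step()
    print(f"pairwise loss: {loss.item():.4f}")
```

The abstract's CIP approach differs in how pairs are constructed: rather than relying on discrete human-labeled pairs alone, it derives continuous, consistent pairwise signals under multiple structuring strategies, but the underlying training objective family is of this pairwise form.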
Similar Papers
RMTBench: Benchmarking LLMs Through Multi-Turn User-Centric Role-Playing
Computation and Language
Tests how well AI can pretend to be people.
One Model to Critique Them All: Rewarding Agentic Tool-Use via Efficient Reasoning
Artificial Intelligence
Helps AI use tools better and smarter.
Omni-Reward: Towards Generalist Omni-Modal Reward Modeling with Free-Form Preferences
Computation and Language
AI learns what you like in any format.