Character-R1: Enhancing Role-Aware Reasoning in Role-Playing Agents via RLVR
By: Yihong Tang, Kehai Chen, Xuefeng Bai, and more
Potential Business Impact:
Makes game characters act more real and consistent.
Current role-playing agents (RPAs) are typically constructed by imitating surface-level behaviors, but this approach lacks internal cognitive consistency and often causes out-of-character errors in complex situations. To address this, we propose Character-R1, a framework designed to provide the comprehensive, verifiable reward signals for effective role-aware reasoning that recent studies lack. Specifically, our framework comprises three core designs: (1) Cognitive Focus Reward, which enforces explicit label-based analysis of 10 character elements (e.g., worldview) to structure internal cognition; (2) Reference-Guided Reward, which uses overlap-based metrics against reference responses as optimization anchors to enhance exploration and performance; and (3) Character-Conditioned Reward Normalization, which adjusts reward distributions based on character categories to ensure robust optimization across heterogeneous roles. Extensive experiments demonstrate that Character-R1 significantly outperforms existing methods in knowledge, memory, and other role-play dimensions.
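Two of the reward designs above can be sketched concretely. The sketch below is illustrative only and is not the paper's implementation: it assumes a token-level F1 overlap as the reference-guided metric (the paper does not specify the exact metric here) and a per-category reward standardization for Character-Conditioned Reward Normalization; all function names and the sample format are hypothetical.

```python
from collections import Counter, defaultdict

def overlap_reward(response: str, reference: str) -> float:
    """Reference-Guided Reward (sketch): token-level F1 overlap between
    the model response and the reference response. Assumed metric, not
    necessarily the one used in Character-R1."""
    resp, ref = response.split(), reference.split()
    if not resp or not ref:
        return 0.0
    common = sum((Counter(resp) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision = common / len(resp)
    recall = common / len(ref)
    return 2 * precision * recall / (precision + recall)

def normalize_by_character(samples):
    """Character-Conditioned Reward Normalization (sketch): standardize
    raw rewards within each character category so that heterogeneous
    roles contribute comparable learning signals during RL."""
    groups = defaultdict(list)
    for s in samples:
        groups[s["category"]].append(s["reward"])
    out = []
    for s in samples:
        rewards = groups[s["category"]]
        mean = sum(rewards) / len(rewards)
        std = (sum((x - mean) ** 2 for x in rewards) / len(rewards)) ** 0.5
        out.append({**s, "reward": (s["reward"] - mean) / (std + 1e-8)})
    return out
```

For example, two rollouts for the same character with raw rewards 1.0 and 3.0 would be normalized to roughly -1.0 and +1.0, so characters whose rewards sit on different scales still yield balanced gradients.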
Similar Papers
Thinking in Character: Advancing Role-Playing Agents with Role-Aware Reasoning
Computation and Language
Makes AI characters think and act like real people.
CogDual: Enhancing Dual Cognition of LLMs via Reinforcement Learning with Implicit Rule-Based Rewards
Computation and Language
Makes computer characters act more real.
ChARM: Character-based Act-adaptive Reward Modeling for Advanced Role-Playing Language Agents
Computation and Language
Makes chatbots act more like real people.