Understanding Generalization in Role-Playing Models via Information Theory
By: Yongqi Li, Hao Lang, Fei Huang, and more
Potential Business Impact:
Helps AI understand why it makes mistakes.
Role-playing models (RPMs) are widely used in real-world applications but underperform when deployed in the wild. This degradation can be attributed to distribution shifts, including user, character, and dialogue compositional shifts. Existing methods such as LLM-as-a-judge fall short of providing a fine-grained diagnosis of how these shifts affect RPM generalization, and no formal framework exists to characterize RPM generalization behavior. To bridge these gaps, we introduce an information-theoretic metric, named reasoning-based effective mutual information difference (R-EMID), to measure RPM performance degradation in an interpretable way. We also derive an upper bound on R-EMID to predict the worst-case generalization performance of RPMs and theoretically reveal how the various shifts contribute to RPM performance degradation. Moreover, we propose a co-evolving reinforcement learning framework that adaptively models the connections among user, character, and dialogue context, thereby improving the estimation of dialogue response generation probability, which is critical for calculating R-EMID. Finally, we evaluate the generalization performance of various RPMs using R-EMID, finding that user shift poses the highest risk among all shifts and that reinforcement learning is the most effective approach for enhancing RPM generalization.
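The abstract does not spell out the R-EMID formula, but the core idea of comparing an information-style quantity across an in-distribution and a shifted evaluation set can be sketched. The snippet below is a minimal, illustrative proxy: it assumes we already have per-token log-probabilities of gold responses under the RPM (conditioned on user, character, and dialogue context) and measures how much the average log-probability drops under a shift. The function and data names are hypothetical, and this is not the paper's exact metric.

```python
from typing import List

def avg_log_prob(per_token_log_probs: List[List[float]]) -> float:
    """Mean per-token log-probability of gold responses under the RPM."""
    flat = [lp for response in per_token_log_probs for lp in response]
    return sum(flat) / len(flat)

def emid_style_degradation(in_dist: List[List[float]],
                           shifted: List[List[float]]) -> float:
    """
    Illustrative proxy for an effective-mutual-information-difference metric:
    the drop in average log p(response | user, character, dialogue context)
    when moving from the training distribution to a shifted one.
    NOTE: this is an assumption-based sketch, not the paper's R-EMID formula.
    """
    return avg_log_prob(in_dist) - avg_log_prob(shifted)

# Hypothetical per-token log-probs for two small evaluation sets.
in_dist_scores = [[-0.4, -0.6, -0.5], [-0.3, -0.7]]
user_shift_scores = [[-1.2, -0.9, -1.4], [-1.0, -1.1]]

print(f"Degradation under user shift: "
      f"{emid_style_degradation(in_dist_scores, user_shift_scores):.3f}")
```

Larger values of this difference would indicate a stronger performance drop under the given shift, which is the kind of comparison the paper uses to rank user, character, and dialogue compositional shifts.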
Similar Papers
Improving LLM Reasoning through Interpretable Role-Playing Steering
Computation and Language
Makes AI better at thinking by controlling its "thoughts."
DIO: Refining Mutual Information and Causal Chain to Enhance Machine Abstract Reasoning Ability
CV and Pattern Recognition
Teaches computers to think and solve puzzles.