HER: Human-like Reasoning and Reinforcement Learning for LLM Role-playing
By: Chengyu Du, Xintao Wang, Aili Chen, and more
Potential Business Impact:
Makes AI characters think like real people.
LLM role-playing, i.e., using LLMs to simulate specific personas, has emerged as a key capability in various applications, such as companionship, content creation, and digital games. While current models effectively capture character tones and knowledge, simulating the inner thoughts behind characters' behaviors remains a challenge. Toward cognitive simulation in LLM role-playing, previous efforts mainly suffer from two deficiencies: a lack of data with high-quality reasoning traces, and a lack of reliable reward signals aligned with human preferences. In this paper, we propose HER, a unified framework for cognitive-level persona simulation. HER introduces dual-layer thinking, which distinguishes characters' first-person thinking from LLMs' third-person thinking. To bridge the two gaps above, we curate reasoning-augmented role-playing data via reverse engineering and construct human-aligned principles and reward models. Leveraging these resources, we train HER models based on Qwen3-32B via supervised and reinforcement learning. Extensive experiments validate the effectiveness of our approach. Notably, our models significantly outperform the Qwen3-32B baseline, achieving a 30.26 improvement on the CoSER benchmark and a 14.97 gain on the Minimax Role-Play Bench. Our datasets, principles, and models will be released to facilitate future research.
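The dual-layer thinking idea lends itself to a concrete data shape. Below is a minimal, hypothetical Python sketch of what a reasoning-augmented role-playing sample might look like, with the character's first-person inner monologue kept separate from the model's third-person, out-of-character analysis. All names here (`RolePlayTurn`, the `<analysis>` and `<inner>` tags) are illustrative assumptions, not the paper's released format.

```python
# Hypothetical sketch of a dual-layer-thinking training sample, assuming the
# two thinking layers are serialized as tagged spans before the visible reply.
from dataclasses import dataclass


@dataclass
class RolePlayTurn:
    persona: str                 # character being simulated
    third_person_thinking: str   # LLM's out-of-character analysis of the scene
    first_person_thinking: str   # character's in-character inner monologue
    utterance: str               # visible reply shown to the user


def to_training_text(turn: RolePlayTurn) -> str:
    """Serialize one turn into a single training string with tagged thinking spans."""
    return (
        f"<analysis>{turn.third_person_thinking}</analysis>\n"
        f"<inner>{turn.first_person_thinking}</inner>\n"
        f"{turn.persona}: {turn.utterance}"
    )


example = RolePlayTurn(
    persona="Hamlet",
    third_person_thinking="Hamlet distrusts the court; his reply should be evasive.",
    first_person_thinking="They watch me. Every word I speak is reported.",
    utterance="Words, words, words.",
)
print(to_training_text(example))
```

Keeping the two layers as separate spans is one plausible way to let a reward model score them against different criteria: the first-person layer against persona fidelity, the third-person layer against sound situational reasoning.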
Similar Papers
Personality-Aware Reinforcement Learning for Persuasive Dialogue with LLM-Driven Simulation
Human-Computer Interaction
Helps computers persuade people better by understanding them.
UserLM-R1: Modeling Human Reasoning in User Language Models with Multi-Reward Reinforcement Learning
Computation and Language
Teaches AI to bargain and negotiate like people.
Reasoning Does Not Necessarily Improve Role-Playing Ability
Computation and Language
Makes AI better at pretending to be people.