UserLM-R1: Modeling Human Reasoning in User Language Models with Multi-Reward Reinforcement Learning
By: Feng Zhang, Shijia Li, Chunmao Zhang, and more
User simulators serve as the critical interactive environment for agent post-training; an ideal simulator generalizes across domains and proactively engages in negotiation by challenging or bargaining. However, current methods exhibit two issues. First, they rely on static, context-unaware profiles that require extensive manual redesign for new scenarios, limiting generalizability. Second, they neglect human strategic thinking, leaving them vulnerable to agent manipulation. To address these issues, we propose UserLM-R1, a novel user language model with reasoning capabilities. Specifically, we first construct comprehensive user profiles combining static roles with dynamic, scenario-specific goals to adapt to diverse scenarios. We then propose a goal-driven decision-making policy that generates high-quality rationales before producing responses, and we further refine the reasoning and strengthen strategic capabilities through supervised fine-tuning and multi-reward reinforcement learning. Extensive experiments demonstrate that UserLM-R1 outperforms competitive baselines, particularly on the more challenging adversarial set.
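The abstract does not specify how the profiles or reward terms are defined, so the following is only a minimal Python sketch of how a multi-reward training signal for such a user simulator might be composed. The `UserProfile` structure, the three reward terms (`goal_reward`, `format_reward`, `strategy_reward`), their weights, and the `<think>` rationale tags are illustrative assumptions, not the paper's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    # Static role attributes reused across scenarios (hypothetical fields).
    role: str
    traits: list[str] = field(default_factory=list)
    # Dynamic, scenario-specific goals regenerated per dialogue.
    goals: list[str] = field(default_factory=list)

def goal_reward(response: str, goals: list[str]) -> float:
    """Fraction of scenario goals the simulated user's turn touches.

    Crude proxy (assumption): keyword overlap between the turn and each goal.
    """
    if not goals:
        return 0.0
    hits = 0
    for goal in goals:
        keywords = [w for w in goal.lower().split() if len(w) > 3]
        if any(w in response.lower() for w in keywords):
            hits += 1
    return hits / len(goals)

def format_reward(response: str) -> float:
    """1.0 if the turn contains a rationale before the reply, else 0.0."""
    return 1.0 if "<think>" in response and "</think>" in response else 0.0

def strategy_reward(response: str) -> float:
    """Stand-in for strategic behavior: reward explicit negotiation moves."""
    moves = ("counteroffer", "discount", "instead", "only if", "too expensive")
    return min(1.0, sum(m in response.lower() for m in moves) / 2)

def multi_reward(response: str, profile: UserProfile,
                 weights: tuple[float, float, float] = (0.5, 0.2, 0.3)) -> float:
    """Weighted sum of the reward terms, used as the RL training signal."""
    w_goal, w_fmt, w_strat = weights
    return (w_goal * goal_reward(response, profile.goals)
            + w_fmt * format_reward(response)
            + w_strat * strategy_reward(response))

if __name__ == "__main__":
    profile = UserProfile(
        role="budget-conscious laptop shopper",
        traits=["skeptical", "direct"],
        goals=["negotiate a discount", "confirm warranty terms"],
    )
    turn = ("<think>The agent's price is above my budget; push back.</think> "
            "That's too expensive for me. I'd take it at $650 with the "
            "warranty included, or I'll look elsewhere.")
    print(f"reward = {multi_reward(turn, profile):.3f}")
```

In a full pipeline, a scalar like this would score each sampled user turn during policy optimization; the weighting between goal progress, rationale formatting, and strategic behavior is a tunable design choice, and the paper's actual terms may differ.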