MOA: Multi-Objective Alignment for Role-Playing Agents
By: Chonghua Liao, Ke Wang, Yuchuan Wu, and others
Potential Business Impact:
Teaches AI to be good at many things at once.
Role-playing agents (RPAs) must simultaneously master several conflicting skills: following multi-turn instructions, exhibiting domain knowledge, and adopting a consistent linguistic style. Existing work either relies on supervised fine-tuning (SFT), which overfits surface cues and yields low diversity, or applies reinforcement learning (RL) that struggles to optimize multiple dimensions jointly for comprehensive RPA optimization. We present MOA (Multi-Objective Alignment), a reinforcement-learning framework that enables multi-dimensional, fine-grained rubric optimization for general RPAs. MOA introduces a novel multi-objective optimization strategy that trains simultaneously on multiple fine-grained rubrics to boost optimization performance. In addition, to improve model output diversity and quality, MOA employs thought-augmented rollout with off-policy guidance. Extensive experiments on challenging benchmarks such as PersonaGym and RoleMRC show that MOA enables an 8B model to match or even outperform strong baselines such as GPT-4o and Claude across numerous dimensions. This demonstrates the great potential of MOA in building RPAs that can simultaneously meet the demands of role knowledge, persona style, diverse scenarios, and complex multi-turn conversations.
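The multi-objective strategy described above scores each response against several fine-grained rubrics at once. A minimal sketch of such rubric aggregation is shown below; the rubric names, the equal weights, and the weighted-sum aggregation are illustrative assumptions, not the paper's actual method:

```python
from typing import Dict

def multi_objective_reward(rubric_scores: Dict[str, float],
                           weights: Dict[str, float]) -> float:
    """Combine per-rubric scores (each in [0, 1]) into one scalar reward.

    A weighted sum is the simplest aggregation; MOA's actual strategy
    is more sophisticated, so treat this as an illustrative baseline.
    """
    total_weight = sum(weights.values())
    return sum(weights[k] * rubric_scores[k] for k in weights) / total_weight

# Hypothetical rubrics for one role-playing agent response.
scores = {"instruction_following": 0.9, "role_knowledge": 0.7, "persona_style": 0.8}
weights = {"instruction_following": 1.0, "role_knowledge": 1.0, "persona_style": 1.0}
reward = multi_objective_reward(scores, weights)  # equal weights -> plain average, 0.8
```

Training on a single scalar like this can collapse dimensions into each other, which is one motivation for optimizing the rubrics as separate objectives rather than a fixed weighted sum.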
Similar Papers
Pareto Multi-Objective Alignment for Language Models
Machine Learning (CS)
Helps AI learn to balance many different goals.
UC-MOA: Utility-Conditioned Multi-Objective Alignment for Distributional Pareto-Optimality
Computation and Language
Teaches AI to better understand what people want.
Improving Model Alignment Through Collective Intelligence of Open-Source LLMs
Computation and Language
Makes AI smarter by using many AI helpers.