Score: 2

Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning

Published: October 31, 2025 | arXiv ID: 2511.00222v1

By: Marwa Abdulhai, Ryan Cheng, Donovan Clay, and more

BigTech Affiliations: University of California, Berkeley

Potential Business Impact:

Keeps simulated AI characters consistently in character across long conversations.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly used to simulate human users in interactive settings such as therapy, education, and social role-play. While these simulations enable scalable training and evaluation of AI agents, off-the-shelf LLMs often drift from their assigned personas, contradict earlier statements, or abandon role-appropriate behavior. We introduce a unified framework for evaluating and improving persona consistency in LLM-generated dialogue. We define three automatic metrics (prompt-to-line consistency, line-to-line consistency, and Q&A consistency) that capture different types of persona drift, and validate each against human annotations. Using these metrics as reward signals, we apply multi-turn reinforcement learning to fine-tune LLMs for three user roles: a patient, a student, and a social chat partner. Our method reduces inconsistency by over 55%, resulting in more coherent and faithful simulated users.
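The abstract only names the metrics, so as a rough illustration, here is a minimal Python sketch of how per-line consistency judgments might be aggregated into a scalar reward for the simulated-user policy. Everything here (the `Turn` container, the `judge_consistent` stub, the equal-weight averaging) is an assumption for illustration, not the paper's implementation; a real system would back the judge with an LLM and would also implement the Q&A consistency probe.

```python
# Minimal sketch (not the paper's implementation) of turning persona-consistency
# judgments into a scalar reward for multi-turn RL fine-tuning of a simulated user.
# The judge below is a placeholder stub; in practice it would be an LLM or a
# trained classifier scoring each consistency check.

from dataclasses import dataclass


@dataclass
class Turn:
    speaker: str  # "user" (the simulated persona) or "agent"
    text: str


def judge_consistent(claim: str, context: str) -> bool:
    """Hypothetical consistency judge. A real system would query an LLM,
    e.g. 'Does this line contradict the persona/context? Answer yes or no.'"""
    return claim.lower() not in {"i have no persona"}  # placeholder logic only


def persona_reward(persona_prompt: str, dialogue: list[Turn]) -> float:
    """Combine prompt-to-line and line-to-line checks (two of the paper's three
    metric types; the weighting and aggregation are assumptions) into a reward
    in [0, 1] for the simulated-user policy."""
    user_lines = [t.text for t in dialogue if t.speaker == "user"]
    if not user_lines:
        return 0.0

    # Prompt-to-line: each simulated-user line should follow the persona prompt.
    p2l = sum(judge_consistent(line, persona_prompt) for line in user_lines)

    # Line-to-line: each line should not contradict the lines that came before it.
    l2l = sum(
        judge_consistent(line, " ".join(user_lines[:i]))
        for i, line in enumerate(user_lines)
        if i > 0
    )

    total_checks = len(user_lines) + max(len(user_lines) - 1, 0)
    return (p2l + l2l) / total_checks


if __name__ == "__main__":
    persona = "You are a patient with chronic migraines seeking therapy."
    dialogue = [
        Turn("agent", "How have you been sleeping?"),
        Turn("user", "Poorly; the headaches wake me up most nights."),
        Turn("user", "The migraines started about two years ago."),
    ]
    print(f"reward = {persona_reward(persona, dialogue):.2f}")
```

A fraction-of-checks-passed reward like this stays bounded and dense across turns, which is one plausible reason per-line metrics suit multi-turn RL better than a single end-of-dialogue score; the actual reward shaping used in the paper may differ.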

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
38 pages

Category
Computer Science:
Computation and Language