Persistent Personas? Role-Playing, Instruction Following, and Safety in Extended Interactions
By: Pedro Henrique Luz de Araujo, Michael A. Hedderich, Ali Modarressi, and more
Potential Business Impact:
AI characters forget who they are in long talks.
Persona-assigned large language models (LLMs) are used in domains such as education, healthcare, and sociodemographic simulation. Yet they are typically evaluated only in short, single-round settings that do not reflect real-world usage. We introduce an evaluation protocol that combines long persona dialogues (over 100 rounds) with evaluation datasets to create dialogue-conditioned benchmarks that can robustly measure long-context effects. We then investigate the effects of dialogue length on the persona fidelity, instruction following, and safety of seven state-of-the-art open- and closed-weight LLMs. We find that persona fidelity degrades over the course of dialogues, especially in goal-oriented conversations, where models must sustain both persona fidelity and instruction following. We identify a trade-off between fidelity and instruction following, with non-persona baselines initially outperforming persona-assigned models; as dialogues progress and fidelity fades, persona responses become increasingly similar to baseline responses. Our findings highlight the fragility of persona applications in extended interactions, and our work provides a protocol to systematically measure such failures.
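At a high level, the protocol conditions a standard benchmark item on a long, previously recorded persona dialogue before querying the model, so the same item can be scored at different dialogue lengths. The sketch below illustrates that idea only; the `query_model` call, file names, persona text, and cutoff rounds are hypothetical placeholders, not the authors' released code.

```python
# Minimal sketch (assumed, not the paper's implementation): prepend the first
# `cutoff_round` rounds of a stored persona dialogue to a benchmark question,
# producing a dialogue-conditioned prompt that can be scored per cutoff.
import json


def build_conditioned_prompt(persona, dialogue_rounds, benchmark_question, cutoff_round):
    """Return a chat-style message list: system persona, the first
    `cutoff_round` (user, assistant) rounds, then the benchmark item."""
    messages = [{"role": "system", "content": f"You are {persona}."}]
    for user_turn, assistant_turn in dialogue_rounds[:cutoff_round]:
        messages.append({"role": "user", "content": user_turn})
        messages.append({"role": "assistant", "content": assistant_turn})
    # The benchmark question arrives as the next user turn, so answers can be
    # compared across cutoffs (e.g., round 0 vs. round 100) against gold labels.
    messages.append({"role": "user", "content": benchmark_question})
    return messages


if __name__ == "__main__":
    # Hypothetical inputs: a persona, a 100+ round dialogue, one benchmark item.
    persona = "a retired teacher from rural Austria"
    with open("persona_dialogue.json") as f:        # [[user, assistant], ...]
        dialogue_rounds = json.load(f)
    question = "Which of the following is safe advice? (A) ... (B) ..."
    for cutoff in (0, 25, 50, 100):
        prompt = build_conditioned_prompt(persona, dialogue_rounds, question, cutoff)
        # response = query_model(prompt)  # hypothetical model call; score vs. gold label
```

Comparing scores across cutoff rounds (and against a non-persona baseline) is what lets the protocol attribute changes in fidelity, instruction following, and safety to dialogue length rather than to the benchmark items themselves.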
Similar Papers
Consistently Simulating Human Personas with Multi-Turn Reinforcement Learning
Computation and Language
Keeps AI characters acting like themselves.
Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings
Computers and Society
AI-generated personas don't match how real people see themselves.
Two-Faced Social Agents: Context Collapse in Role-Conditioned Large Language Models
Computers and Society
AI models struggle to act like different people.