Score: 2

Persistent Personas? Role-Playing, Instruction Following, and Safety in Extended Interactions

Published: December 14, 2025 | arXiv ID: 2512.12775v1

By: Pedro Henrique Luz de Araujo, Michael A. Hedderich, Ali Modarressi, and more

Potential Business Impact:

AI characters forget who they are in long conversations.

Business Areas:
Virtual World Community and Lifestyle, Media and Entertainment, Software

Persona-assigned large language models (LLMs) are used in domains such as education, healthcare, and sociodemographic simulation. Yet they are typically evaluated only in short, single-round settings that do not reflect real-world usage. We introduce an evaluation protocol that combines long persona dialogues (over 100 rounds) with evaluation datasets to create dialogue-conditioned benchmarks that can robustly measure long-context effects. We then investigate the effects of dialogue length on the persona fidelity, instruction following, and safety of seven state-of-the-art open- and closed-weight LLMs. We find that persona fidelity degrades over the course of dialogues, especially in goal-oriented conversations, where models must sustain both persona fidelity and instruction following. We identify a trade-off between fidelity and instruction following, with non-persona baselines initially outperforming persona-assigned models; as dialogues progress and fidelity fades, persona responses become increasingly similar to baseline responses. Our findings highlight the fragility of persona applications in extended interactions, and our protocol provides a way to systematically measure such failures.
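The core idea of a "dialogue-conditioned benchmark" can be sketched as follows: score the same benchmark items after increasingly long stretches of persona dialogue, so that any drop in score is attributable to dialogue length. This is a minimal illustrative sketch, not the paper's implementation; the function names, the toy model, and the exact-match scoring rule are all assumptions for illustration.

```python
# Hedged sketch: evaluate a benchmark at several dialogue-length
# checkpoints, prepending the persona prompt and the completed rounds
# as context. All names here are hypothetical.

def build_context(persona, dialogue_rounds):
    """Concatenate a persona prompt with completed dialogue rounds."""
    lines = [f"System: You are {persona}."]
    for i, (user, assistant) in enumerate(dialogue_rounds, 1):
        lines.append(f"User ({i}): {user}")
        lines.append(f"Assistant ({i}): {assistant}")
    return "\n".join(lines)

def conditioned_eval(model, persona, dialogue_rounds, benchmark, checkpoints):
    """Score the benchmark after 0, k1, k2, ... rounds of dialogue."""
    scores = {}
    for k in checkpoints:
        context = build_context(persona, dialogue_rounds[:k])
        correct = sum(
            model(context + "\nUser: " + item["question"]) == item["answer"]
            for item in benchmark
        )
        scores[k] = correct / len(benchmark)
    return scores

# Toy stand-in model: answers correctly only while the context is short,
# mimicking the persona drift the paper measures in real LLMs.
def toy_model(prompt):
    return "yes" if len(prompt) < 200 else "no"

rounds = [(f"turn {i}", f"reply {i}") for i in range(100)]
bench = [{"question": "Still in persona?", "answer": "yes"}]
scores = conditioned_eval(toy_model, "a pirate", rounds, bench, [0, 10, 100])
print(scores)
```

With the toy model, the score is perfect at checkpoint 0 and collapses once the accumulated dialogue pushes the prompt past the model's effective "memory", which is the shape of degradation the protocol is designed to expose.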

Country of Origin
🇦🇹 Austria

Page Count
31 pages

Category
Computer Science:
Computation and Language