Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
By: Pedro Henrique Luz de Araujo, Paul Röttger, Dirk Hovy, and more
Potential Business Impact:
Makes AI smarter by telling it who to be.
Expert persona prompting -- assigning roles such as "math expert" to language models -- is widely used to improve task performance. However, prior work reports mixed results on its effectiveness and does not consider when and why personas should improve performance. We analyze the literature on persona prompting for task improvement and distill three desiderata: 1) performance advantage of expert personas, 2) robustness to irrelevant persona attributes, and 3) fidelity to persona attributes. We then evaluate 9 state-of-the-art LLMs across 27 tasks with respect to these desiderata. We find that expert personas usually lead to positive or non-significant performance changes. Surprisingly, models are highly sensitive to irrelevant persona details, with performance drops of almost 30 percentage points. In terms of fidelity, we find that while higher education, specialization, and domain-relatedness can boost performance, their effects are often inconsistent or negligible across tasks. We propose mitigation strategies to improve robustness, but find that they only work for the largest, most capable models. Our findings underscore the need for more careful persona design and for evaluation schemes that reflect the intended effects of persona usage.
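For readers unfamiliar with the setup, the sketch below shows how persona conditions like the ones described above are typically constructed as system prompts, with one variant per desideratum. The persona texts, task question, and helper function are illustrative assumptions, not the paper's exact prompts or evaluation harness.

```python
# Minimal sketch of persona-prompt construction, one variant per desideratum.
# Persona texts, the task question, and this harness are illustrative
# assumptions, not the paper's exact prompts or evaluation code.

BASELINE = None  # no persona: the control condition

PERSONAS = {
    # Desideratum 1: expert personas should help, or at least not hurt.
    "expert": "You are an expert in mathematics.",
    # Desideratum 2: irrelevant details (hobbies, etc.) should not change results.
    "expert_irrelevant": "You are an expert in mathematics who loves hiking.",
    # Desideratum 3: fidelity -- a less-qualified persona should not outperform.
    "novice": "You are a high-school student who just started learning math.",
}

def build_messages(persona, question):
    """Prepend the persona as a system message, if one is given."""
    messages = []
    if persona is not None:
        messages.append({"role": "system", "content": persona})
    messages.append({"role": "user", "content": question})
    return messages

question = "What is the derivative of x**2 * sin(x)?"
for name, persona in [("baseline", BASELINE), *PERSONAS.items()]:
    print(name, "->", build_messages(persona, question))
```

Comparing task accuracy across conditions like these against the no-persona baseline is what the three desiderata formalize; actually sending the messages to a model would require a chat API and is omitted here.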
Similar Papers
Prompting Science Report 4: Playing Pretend: Expert Personas Don't Improve Factual Accuracy
Computation and Language
Giving AI pretend jobs doesn't help it answer questions.
No for Some, Yes for Others: Persona Prompts and Other Sources of False Refusal in Language Models
Computation and Language
AI sometimes refuses requests based on fake identities.
Profile-LLM: Dynamic Profile Optimization for Realistic Personality Expression in LLMs
Computation and Language
Makes AI talk like a real person.