Do Persona-Infused LLMs Affect Performance in a Strategic Reasoning Game?
By: John Licato, Stephen Steinle, Brayden Hollis
Potential Business Impact:
Giving AI a role can make it play strategy games better.
Although persona prompting in large language models appears to trigger different styles of generated text, it is unclear whether these stylistic shifts translate into measurable behavioral differences, much less whether they affect decision-making in an adversarial strategic environment. We investigate the impact of persona prompting on strategic performance in PERIL, an open-source world-domination board game that we provide. Specifically, we compare the effectiveness of persona-derived heuristic strategies against manually chosen ones. Our findings reveal that certain personas associated with strategic thinking improve game performance, but only when a mediator is used to translate personas into heuristic values. We introduce this mediator as a structured translation process, inspired by exploratory factor analysis, that maps LLM-generated inventory responses onto heuristics. Results indicate that our method enhances heuristic reliability and face validity compared with directly inferred heuristics, allowing us to better study the effect of persona types on decision-making. These insights advance our understanding of how persona prompting influences LLM-based decision-making, and we propose a heuristic-generation method that applies psychometric principles to LLMs.
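To make the mediator idea concrete, here is a minimal sketch of how factor-analysis-style loadings could map a persona's Likert inventory responses into heuristic values. The inventory items, the two heuristic factors ("aggression", "consolidation"), and all loading numbers are invented for illustration; the paper's actual inventory and translation process are not reproduced here.

```python
# Hypothetical sketch of a mediator: project persona-conditioned Likert
# responses through factor loadings into heuristic weights, in the spirit
# of exploratory factor analysis. All items and loadings are invented.
LOADINGS = {
    "I attack whenever I outnumber an opponent":    (0.8, -0.1),
    "I prefer to fortify borders before expanding": (-0.2, 0.9),
    "I take risks for large territorial gains":     (0.7, 0.0),
    "I keep my armies concentrated":                (0.0, 0.6),
}

def responses_to_heuristics(responses):
    """Map 1-5 Likert responses (keyed by item) to heuristic values in [0, 1].

    Each response is centered to [-1, 1], weighted by its factor loadings,
    summed per factor, and rescaled by the factor's total absolute loading.
    """
    scores = [0.0, 0.0]
    norms = [0.0, 0.0]
    for item, loadings in LOADINGS.items():
        centered = (responses[item] - 3) / 2.0  # 1..5 -> -1..1
        for k, weight in enumerate(loadings):
            scores[k] += weight * centered
            norms[k] += abs(weight)
    # Rescale each normalized factor score from [-1, 1] to [0, 1].
    return {
        "aggression":    (scores[0] / norms[0] + 1) / 2,
        "consolidation": (scores[1] / norms[1] + 1) / 2,
    }

# Example: a maximally aggressive persona's responses.
persona = {
    "I attack whenever I outnumber an opponent": 5,
    "I prefer to fortify borders before expanding": 1,
    "I take risks for large territorial gains": 5,
    "I keep my armies concentrated": 3,
}
print(responses_to_heuristics(persona))
```

Aggregating many items per factor, rather than asking the LLM for heuristic values directly, is what the paper argues improves reliability and face validity: any single noisy response contributes only its loading's share to the final heuristic.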
Similar Papers
Principled Personas: Defining and Measuring the Intended Effects of Persona Prompting on Task Performance
Computation and Language
Makes AI smarter by telling it who to be.
Synthetic Socratic Debates: Examining Persona Effects on Moral Decision and Persuasion Dynamics
Computation and Language
AI's personality changes how it argues about right and wrong.
Misalignment of LLM-Generated Personas with Human Perceptions in Low-Resource Settings
Computers and Society
AI personalities don't understand people like real humans.