Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need
By: Yuqi Bai, Tianyu Huang, Kun Sun, and more
Potential Business Impact:
Computers can now pretend to be people.
This research uses large language models (LLMs) to simulate social experiments, examining how well they can emulate human personality through virtual persona role-playing. The authors develop an end-to-end evaluation framework that combines individual-level analysis of stability and identifiability with a population-level analysis, termed progressive personality curves, to assess the veracity and consistency of LLM-simulated personality. Methodologically, the research proposes important modifications to traditional psychometric approaches (CFA and construct validity), which cannot capture improvement trends while LLM simulation remains at a low level and may therefore lead to premature rejection or methodological misalignment. The main contributions are: a systematic framework for evaluating LLM virtual personality; empirical evidence that persona detail is critical to personality simulation quality; and the identification of marginal utility effects of persona profiles, in particular a Scaling Law in LLM personality simulation, offering operational evaluation metrics and a theoretical foundation for applying LLMs in social science experiments.
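The abstract does not specify the functional form of the reported scaling law, so the sketch below is only an illustration of what "progressive personality curves with diminishing marginal utility" could look like in practice: it assumes a saturating power law and uses made-up fidelity numbers, then fits the curve with SciPy. None of the variable names, data points, or the chosen functional form come from the paper.

```python
# Minimal sketch (not the authors' code): fitting a scaling-law-style curve of
# personality-simulation fidelity against persona-profile detail.
# All numbers are hypothetical; the form fidelity(d) = L - b * d**(-alpha)
# is an assumed saturating power law, not the paper's stated model.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: persona detail (e.g., number of profile fields)
# and a 0-1 fidelity score (e.g., agreement with human Big Five responses).
detail = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)
fidelity = np.array([0.42, 0.51, 0.58, 0.64, 0.68, 0.70, 0.71])

def scaling_curve(d, L, b, alpha):
    """Saturating power law: fidelity rises with detail but with diminishing returns."""
    return L - b * d ** (-alpha)

params, _ = curve_fit(scaling_curve, detail, fidelity, p0=[0.75, 0.3, 0.5])
L_hat, b_hat, alpha_hat = params
print(f"asymptote L={L_hat:.3f}, scale b={b_hat:.3f}, exponent alpha={alpha_hat:.3f}")

# Marginal utility of extra detail: derivative of the fitted curve,
# which shrinks as the profile grows richer.
marginal = b_hat * alpha_hat * detail ** (-alpha_hat - 1)
for d, m in zip(detail, marginal):
    print(f"detail={d:4.0f}  marginal gain per extra unit ~ {m:.4f}")
```

Under this kind of fit, the estimated asymptote L would cap how much fidelity persona detail alone can buy, which is one way the "more detail helps, but with diminishing returns" finding could be made operational.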
Similar Papers
In Silico Development of Psychometric Scales: Feasibility of Representative Population Data Simulation with LLMs
Human-Computer Interaction
Lets computers create fake people for testing.
Social Simulations with Large Language Model Risk Utopian Illusion
Computation and Language
Computers show fake, too-nice people in chats.
Profile-LLM: Dynamic Profile Optimization for Realistic Personality Expression in LLMs
Computation and Language
Makes AI talk like a real person.