LLM Personas as a Substitute for Field Experiments in Method Benchmarking
By: Enoch Hyunwook Kang
Field experiments (A/B tests) are often the most credible benchmark for methods in societal systems, but their cost and latency create a major bottleneck for iterative method development. LLM-based persona simulation offers a cheap synthetic alternative, yet it is unclear whether replacing humans with personas preserves the benchmark interface that adaptive methods optimize against. We prove an if-and-only-if characterization: when (i) methods observe only the aggregate outcome (aggregate-only observation) and (ii) evaluation depends only on the submitted artifact, not on the algorithm's identity or provenance (algorithm-blind evaluation), swapping humans for personas is, from the method's point of view, just a panel change, indistinguishable from changing the evaluation population (e.g., from New York to Jakarta). We then move from validity to usefulness: we define an information-theoretic discriminability of the induced aggregate channel and show that making persona benchmarking as decision-relevant as a field experiment is fundamentally a sample-size question, yielding explicit bounds on the number of independent persona evaluations required to reliably distinguish meaningfully different methods at a chosen resolution.
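As an illustrative sketch only (not the paper's actual bound), a standard Hoeffding-style argument conveys why the usefulness question reduces to sample size: if two methods' true aggregate outcomes lie in [0, 1] and differ by at least a gap Δ, then roughly 2 ln(1/δ)/Δ² independent persona evaluations per method suffice to rank them correctly with probability at least 1 − δ. The function name, the [0, 1] outcome scale, and the i.i.d. assumption below are illustrative choices, not definitions from the paper.

```python
import math

def persona_sample_size(delta_gap: float, failure_prob: float = 0.05) -> int:
    """Hoeffding-style sketch: i.i.d. persona evaluations needed per method
    to correctly rank two methods whose true aggregate outcomes (in [0, 1])
    differ by at least `delta_gap`, with misranking probability <= failure_prob.

    Sketch of the derivation: with n evaluations per method, the per-evaluation
    differences lie in [-1, 1], so P(misrank) <= exp(-n * delta_gap**2 / 2);
    solving for n gives the bound returned below.
    """
    if not 0 < delta_gap <= 1:
        raise ValueError("delta_gap must lie in (0, 1]")
    return math.ceil(2 * math.log(1 / failure_prob) / delta_gap ** 2)

# Example: resolving a 5-point gap on a 0-100 scale (delta_gap = 0.05) at 95%
# confidence needs roughly 2,400 persona evaluations per method.
print(persona_sample_size(0.05, 0.05))
```

The point of the sketch is the scaling: the required number of persona evaluations grows as 1/Δ², so finer resolution between methods is paid for quadratically in synthetic sample size.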
Similar Papers
Whose Personae? Synthetic Persona Experiments in LLM Research and Pathways to Transparency
Computers and Society
Examines whose personas get simulated in LLM research and argues for greater transparency.
PersonaFeedback: A Large-scale Human-annotated Benchmark For Personalization
Computation and Language
Tests if AI can give personalized answers.
Scaling Law in LLM Simulated Personality: More Detailed and Realistic Persona Profile Is All You Need
Computers and Society
Shows that more detailed, realistic persona profiles make LLM-simulated personalities more faithful.