PolyPersona: Persona-Grounded LLM for Synthetic Survey Responses
By: Tejaswani Dash, Dinesh Karri, Anudeep Vurity, and more
Potential Business Impact:
Makes computers answer surveys like real people.
This paper introduces PolyPersona, a generative framework for synthesizing persona-conditioned survey responses across multiple domains. The framework instruction-tunes compact chat models using parameter-efficient LoRA adapters with 4-bit quantization under a resource-adaptive training setup. A dialogue-based data pipeline explicitly preserves persona cues, ensuring consistent behavioral alignment across generated responses. Using this pipeline, we construct a dataset of 3,568 synthetic survey responses spanning ten domains and 433 distinct personas, enabling controlled instruction tuning and systematic multi-domain evaluation. We evaluate the generated responses using a multi-metric evaluation suite that combines standard text generation metrics, including BLEU, ROUGE, and BERTScore, with survey-specific metrics designed to assess structural coherence, stylistic consistency, and sentiment alignment. Experimental results show that compact models such as TinyLlama 1.1B and Phi-2 achieve performance comparable to larger 7B to 8B baselines, with a best BLEU score of 0.090 and ROUGE-1 of 0.429. These findings demonstrate that persona-conditioned fine-tuning enables small language models to generate reliable and coherent synthetic survey data. The proposed framework provides an efficient and reproducible approach for survey data generation, supporting scalable evaluation while facilitating bias analysis through transparent and open protocols.
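To illustrate the kind of training setup the abstract describes (parameter-efficient LoRA adapters on a 4-bit-quantized compact chat model), here is a minimal sketch using Hugging Face transformers and peft. The model name, LoRA rank, target modules, and quantization settings below are illustrative assumptions, not the paper's reported configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Hypothetical base model; the paper fine-tunes compact chat models such as TinyLlama 1.1B.
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# 4-bit quantization config (NF4 with double quantization is a common default; assumed here).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# LoRA adapter config; rank, alpha, dropout, and target modules are illustrative, not the paper's values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The wrapped model can then be instruction-tuned on persona-conditioned dialogue data with a standard causal-LM trainer; only the small LoRA adapter weights are updated, which is what keeps the approach feasible on resource-constrained hardware.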
Similar Papers
DeepPersona: A Generative Engine for Scaling Deep Synthetic Personas
Artificial Intelligence
Creates AI with real-life personalities.
Whose Personae? Synthetic Persona Experiments in LLM Research and Pathways to Transparency
Computers and Society
Makes AI understand people better and more fairly.
Persistent Personas? Role-Playing, Instruction Following, and Safety in Extended Interactions
Computation and Language
AI characters forget who they are in long talks.