Synthesizing Public Opinions with LLMs: Role Creation, Impacts, and the Future of eDemocracy
By: Rabimba Karanjai, Boris Shor, Amanda Austin, and more
Potential Business Impact:
Helps computers predict what people think more accurately.
This paper investigates the use of Large Language Models (LLMs) to synthesize public opinion data, addressing challenges in traditional survey methods such as declining response rates and non-response bias. We introduce a novel technique: role creation based on knowledge injection, a form of in-context learning that leverages retrieval-augmented generation (RAG) together with personality profiles specified by the HEXACO model and demographic information to dynamically generate prompts. This method allows LLMs to simulate diverse opinions more accurately than existing prompt-engineering approaches. We compare our results against pre-trained models with standard few-shot prompts. Experiments using questions from the Cooperative Election Study (CES) demonstrate that our role-creation approach significantly improves the alignment of LLM-generated opinions with real-world human survey responses, increasing answer adherence. In addition, we discuss challenges, limitations, and future research directions.
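To make the role-creation idea concrete, here is a minimal sketch of how a persona built from demographic fields and HEXACO scores, plus retrieved background text, might be rendered into a dynamically generated prompt. This is not the authors' actual pipeline: the `Persona` fields, the `build_role_prompt` helper, the [1, 5] trait scale with a neutral default of 3.0, and the example retrieved fact are all illustrative assumptions.

```python
# A minimal sketch (not the paper's exact implementation) of role creation
# via knowledge injection: demographics + HEXACO traits + retrieved context
# are rendered into an in-context "role" prompt for an LLM.

from dataclasses import dataclass, field

# The six HEXACO dimensions; scores here are assumed to lie in [1, 5].
HEXACO_TRAITS = (
    "Honesty-Humility", "Emotionality", "Extraversion",
    "Agreeableness", "Conscientiousness", "Openness to Experience",
)

@dataclass
class Persona:
    age: int
    gender: str
    education: str
    party_id: str
    state: str
    hexaco: dict                                   # trait name -> score in [1, 5]
    retrieved_facts: list = field(default_factory=list)  # RAG snippets

def build_role_prompt(persona: Persona, question: str) -> str:
    """Render a persona into a dynamically generated survey prompt."""
    traits = ", ".join(
        f"{t}: {persona.hexaco.get(t, 3.0):.1f}/5" for t in HEXACO_TRAITS
    )
    # Knowledge injection: retrieved background text is placed directly in
    # the prompt so the model can condition on it in-context.
    context = "\n".join(f"- {fact}" for fact in persona.retrieved_facts)
    return (
        f"You are a {persona.age}-year-old {persona.gender} from {persona.state} "
        f"with a {persona.education} education who identifies as {persona.party_id}.\n"
        f"Your HEXACO personality profile is: {traits}.\n"
        f"Relevant background:\n{context}\n\n"
        "Answer the following survey question in character, choosing only "
        f"from the options given.\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    respondent = Persona(
        age=46, gender="woman", education="college",
        party_id="independent", state="Ohio",
        hexaco={"Openness to Experience": 4.2, "Conscientiousness": 3.8},
        retrieved_facts=["Local unemployment rose sharply in spring 2020."],
    )
    print(build_role_prompt(
        respondent,
        "Do you approve of the governor's job performance? "
        "(Approve / Disapprove / Not sure)",
    ))
```

The resulting prompt string would be sent as the system or user message to whichever LLM is being evaluated; the paper's contribution is that conditioning on such roles aligns the model's answers with real CES respondents better than standard few-shot prompting.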
Similar Papers
Should you use LLMs to simulate opinions? Quality checks for early-stage deliberation
Computers and Society
Tests if AI opinions are trustworthy for surveys.
An Analysis of Large Language Models for Simulating User Responses in Surveys
Computation and Language
Helps computers understand many different opinions.
Emulating Public Opinion: A Proof-of-Concept of AI-Generated Synthetic Survey Responses for the Chilean Case
Computation and Language
Computers can answer survey questions like people.