An Analysis of Large Language Models for Simulating User Responses in Surveys
By: Ziyun Yu, Yiru Zhou, Chen Zhao, and more
Potential Business Impact:
Helps computers represent the many different opinions real people hold.
Using Large Language Models (LLMs) to simulate user opinions has received growing attention. Yet LLMs, especially those trained with reinforcement learning from human feedback (RLHF), are known to exhibit biases toward dominant viewpoints, raising concerns about their ability to represent users from diverse demographic and cultural backgrounds. In this work, we examine the extent to which LLMs can simulate human responses to cross-domain survey questions through direct prompting and chain-of-thought prompting. We further propose CLAIMSIM, a claim diversification method that elicits viewpoints from the LLM's parametric knowledge and supplies them as contextual input. Experiments on the survey question answering task indicate that, while CLAIMSIM produces more diverse responses, both approaches struggle to accurately simulate users. Further analysis reveals two key limitations: (1) LLMs tend to maintain fixed viewpoints across varying demographic features and generate single-perspective claims; and (2) when presented with conflicting claims, LLMs struggle to reason over nuanced differences among demographic features, limiting their ability to adapt responses to specific user profiles.
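The two setups compared in the abstract can be sketched in code. This is a minimal illustration based only on the abstract, not the authors' implementation: `call_llm` is a hypothetical placeholder for any chat-completion client, and the prompt wording, the profile fields, and the two-stage CLAIMSIM flow (elicit diverse claims, then answer conditioned on them) are all assumptions.

```python
# Sketch of direct prompting vs. a CLAIMSIM-style claim-diversification pipeline
# for simulating survey respondents. All prompts and helpers are illustrative.

def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real chat-completion call (OpenAI, local model, etc.)."""
    return "Neutral"  # canned response so the sketch runs end to end


def direct_prompt(profile: dict, question: str, options: list[str]) -> str:
    # Direct prompting: ask the model to answer as the described user.
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return call_llm(
        f"You are a survey respondent with this profile: {persona}.\n"
        f"Question: {question}\nOptions: {options}\n"
        "Answer with exactly one option."
    )


def claimsim_prompt(profile: dict, question: str, options: list[str], n_claims: int = 4) -> str:
    # CLAIMSIM-style pipeline (assumed): first elicit diverse claims from the
    # model's parametric knowledge, then condition the final answer on them.
    claims = call_llm(
        f"List {n_claims} distinct viewpoints people hold about: {question}\n"
        "Include perspectives that disagree with each other, one per line."
    )
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    return call_llm(
        f"Viewpoints held by different groups:\n{claims}\n\n"
        f"You are a survey respondent with this profile: {persona}.\n"
        "Weighing the viewpoints above, answer the question.\n"
        f"Question: {question}\nOptions: {options}\n"
        "Answer with exactly one option."
    )


if __name__ == "__main__":
    profile = {"age": "62", "country": "Brazil", "education": "high school"}
    question = "Should governments prioritize economic growth over environmental protection?"
    options = ["Agree", "Neutral", "Disagree"]
    print(direct_prompt(profile, question, options))    # baseline simulation
    print(claimsim_prompt(profile, question, options))  # claim-diversified simulation
```

Separating claim elicitation from answering mirrors the abstract's description of drawing viewpoints from parametric knowledge and feeding them back as context; the paper's finding is that even with this added diversity, the final answer often fails to adapt to the specific profile.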
Similar Papers
Evaluating Contrastive Feedback for Effective User Simulations
Information Retrieval
Teaches computers to act like real people searching for information.
Social Simulations with Large Language Model Risk Utopian Illusion
Computation and Language
Computer simulations of chats portray unrealistically nice, idealized people.
Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges
Artificial Intelligence
Lets computer characters act more like real people.