Score: 1

Hypothesis Testing for Quantifying LLM-Human Misalignment in Multiple Choice Settings

Published: June 17, 2025 | arXiv ID: 2506.14997v1

By: Harbin Hong, Sebastian Caldas, Liu Leqi

BigTech Affiliations: Princeton University

Potential Business Impact:

Tests whether AI language models can reproduce how people answer survey questions.

Business Areas:
A/B Testing; Data and Analytics

As Large Language Models (LLMs) increasingly appear in social science research (e.g., economics and marketing), it becomes crucial to assess how well these models replicate human behavior. In this work, using hypothesis testing, we present a quantitative framework to assess the misalignment between LLM-simulated and actual human behaviors in multiple-choice survey settings. This framework allows us to determine in a principled way whether a specific language model can effectively simulate human opinions, decision-making, and general behaviors represented through multiple-choice options. We applied this framework to a popular language model for simulating people's opinions in various public surveys and found that this model is ill-suited for simulating the tested sub-populations (e.g., across different races, ages, and incomes) for contentious questions. This raises questions about the alignment of this language model with the tested populations, highlighting the need for new practices in using LLMs for social science studies beyond naive simulations of human subjects.
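The core idea of the framework is a statistical test of whether LLM-simulated answers to a multiple-choice question could plausibly have been drawn from the human response distribution for that question. Below is a minimal sketch of this idea in Python, assuming a chi-square goodness-of-fit test with illustrative option counts and a conventional 0.05 significance level; the paper's actual test statistic, sampling procedure, and corrections may differ.

```python
# Minimal sketch (not the paper's exact method): a chi-square goodness-of-fit
# test comparing LLM-simulated answer counts against a human response
# distribution for one multiple-choice question. All counts are placeholders.
import numpy as np
from scipy.stats import chisquare

# Human survey responses for a 4-option question (counts per option).
human_counts = np.array([120, 340, 90, 50])
human_probs = human_counts / human_counts.sum()

# Answers sampled from the LLM when prompted with the same question and a
# persona matching the surveyed sub-population (counts per option).
llm_counts = np.array([30, 45, 15, 10])

# Null hypothesis: the LLM's answers are drawn from the human distribution.
expected = human_probs * llm_counts.sum()
stat, p_value = chisquare(f_obs=llm_counts, f_exp=expected)

print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: LLM answers appear misaligned with this sub-population.")
else:
    print("Fail to reject H0: no evidence of misalignment at alpha = 0.05.")
```

Repeating such a test per question and per sub-population (e.g., by race, age, or income group) is one way to localize where a model's simulated opinions diverge from the surveyed population, which is the kind of breakdown the abstract describes for contentious questions.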

Country of Origin
🇺🇸 United States

Page Count
11 pages

Category
Computer Science: Computers and Society