SocioBench: Modeling Human Behavior in Sociological Surveys with Large Language Models
By: Jia Wang, Ziyu Zhao, Tingjuntao Ni, and more
Potential Business Impact:
Helps evaluate how accurately AI models can predict people's survey responses across countries and demographic groups.
Large language models (LLMs) show strong potential for simulating human social behaviors and interactions, yet they lack large-scale, systematically constructed benchmarks for evaluating their alignment with real-world social attitudes. To bridge this gap, we introduce SocioBench, a comprehensive benchmark derived from the annually collected, standardized survey data of the International Social Survey Programme (ISSP). The benchmark aggregates over 480,000 real respondent records from more than 30 countries, spanning 10 sociological domains and over 40 demographic attributes. Our experiments indicate that LLMs achieve only 30-40% accuracy when simulating individuals in complex survey scenarios, with statistically significant differences across domains and demographic subgroups. These findings highlight several limitations of current LLMs in survey scenarios, including insufficient individual-level data coverage, inadequate scenario diversity, and missing group-level modeling.
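The evaluation described above, comparing an LLM's simulated answers against real respondents' recorded answers, can be sketched as follows. This is a minimal illustrative sketch, not the actual SocioBench pipeline: the record schema, field names, and the `simulate_answer` stub are all hypothetical assumptions.

```python
def simulate_answer(profile, question, options):
    # Stand-in for an LLM call conditioned on a demographic profile.
    # Here it trivially picks the first option; a real evaluation
    # would prompt a model with the profile and question.
    return options[0]

def score_respondents(records):
    """Return the fraction of survey items where the simulated
    answer matches the real respondent's recorded answer."""
    correct = total = 0
    for rec in records:
        for item in rec["items"]:
            pred = simulate_answer(rec["profile"],
                                   item["question"],
                                   item["options"])
            correct += pred == item["answer"]
            total += 1
    return correct / total if total else 0.0

# Toy example with one respondent and two hypothetical items.
records = [
    {"profile": {"age": 34, "country": "DE"},
     "items": [
         {"question": "Trust in government?",
          "options": ["high", "low"], "answer": "high"},
         {"question": "Attend religious services?",
          "options": ["weekly", "never"], "answer": "never"},
     ]},
]
print(score_respondents(records))  # 0.5 with this stub
```

Under this kind of per-item exact-match scoring, the 30-40% accuracy reported in the abstract would mean the model's simulated answer agrees with the real respondent on roughly one item in three.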
Similar Papers
SimBench: Benchmarking the Ability of Large Language Models to Simulate Human Behaviors
Computation and Language
Tests if AI acts like real people.