SocialEval: Evaluating Social Intelligence of Large Language Models
By: Jinfeng Zhou, Yuxuan Chen, Yihan Shi, and more
Potential Business Impact:
Helps computers understand and act like people.
LLMs exhibit promising Social Intelligence (SI) in modeling human behavior, raising the need to evaluate LLMs' SI and how it diverges from humans'. SI equips humans with the interpersonal abilities needed to behave wisely when navigating social interactions toward social goals. This suggests an operational evaluation paradigm: outcome-oriented evaluation of goal achievement and process-oriented evaluation of interpersonal abilities, which existing work fails to address. To this end, we propose SocialEval, a script-based bilingual SI benchmark that integrates outcome- and process-oriented evaluation through manually crafted narrative scripts. Each script is structured as a world tree whose plot lines are driven by interpersonal abilities, providing a comprehensive view of how LLMs navigate social interactions. Experiments show that LLMs fall behind humans on both SI evaluations, exhibit prosociality, and prefer more positive social behaviors even when these lead to goal failure. Analysis of LLMs' representation spaces and neuronal activations reveals that LLMs have developed ability-specific functional partitions akin to the human brain.
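To make the world-tree idea concrete, here is a minimal sketch in Python of how a script's branching plot lines could be represented and scored on both axes. The class and field names (PlotNode, ability, goal_achieved, evaluate_path) are illustrative assumptions for this sketch, not the benchmark's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical node in a SocialEval-style world tree.
# Field names are assumptions made for illustration.
@dataclass
class PlotNode:
    scene: str                      # narrative text the model sees at this point
    ability: Optional[str] = None   # interpersonal ability driving this branch
    goal_achieved: bool = False     # outcome label, meaningful only at leaves
    children: list["PlotNode"] = field(default_factory=list)

def evaluate_path(root: PlotNode, choices: list[int]) -> tuple[bool, list[str]]:
    """Walk the tree along a model's branch choices.

    Returns the outcome-oriented result (did the path end in goal success?)
    and the process-oriented trace (which abilities the path exercised).
    """
    node, abilities = root, []
    for choice in choices:
        node = node.children[choice]
        if node.ability:
            abilities.append(node.ability)
    return node.goal_achieved, abilities

# Tiny example: one fork where the prosocial-seeming option fails the goal.
tree = PlotNode(
    scene="Your friend asks what you think of their business plan.",
    children=[
        PlotNode(scene="You give tactful, honest feedback.",
                 ability="honest communication", goal_achieved=True),
        PlotNode(scene="You flatter them to avoid conflict.",
                 ability="ingratiation", goal_achieved=False),
    ],
)
print(evaluate_path(tree, [0]))  # (True, ['honest communication'])
```

Separating the leaf's outcome label from the per-branch ability trace is what lets a single traversal yield both the outcome-oriented score and the process-oriented profile the abstract describes.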
Similar Papers
SI-Bench: Benchmarking Social Intelligence of Large Language Models in Human-to-Human Conversations
Computation and Language
Tests how well AI understands conversations between people.
SocioBench: Modeling Human Behavior in Sociological Surveys with Large Language Models
Social and Information Networks
Helps computers understand how people think.
Integrating LLM in Agent-Based Social Simulation: Opportunities and Challenges
Artificial Intelligence
Lets computer characters act more like real people.