Beyond Survival: Evaluating LLMs in Social Deduction Games with Human-Aligned Strategies
By: Zirui Song, Yuan Huang, Junchang Liu, and more
Potential Business Impact:
Teaches computers to play social games better.
Social deduction games like Werewolf combine language, reasoning, and strategy, providing a testbed for studying natural language and social intelligence. However, most studies reduce the game to LLM-based self-play, yielding templated utterances and anecdotal cases that overlook the richness of social gameplay. Evaluation further relies on coarse metrics such as survival time or subjective scoring due to the lack of quality reference data. To address these gaps, we curate a high-quality, human-verified multimodal Werewolf dataset containing over 100 hours of video, 32.4M utterance tokens, and 15 rule variants. Based on this dataset, we propose a novel strategy-alignment evaluation that leverages the winning faction's strategies as ground truth in two stages: 1) Speech evaluation, formulated as multiple-choice-style tasks that assess whether the model can adopt appropriate stances across five dimensions of social ability; and 2) Decision evaluation, which assesses the model's voting choices and opponent-role inferences. This framework enables a fine-grained evaluation of models' linguistic and reasoning capabilities, while capturing their ability to generate strategically coherent gameplay. Our experiments show that state-of-the-art LLMs exhibit diverse performance, with roughly half scoring below 0.50, revealing clear gaps in deception and counterfactual reasoning. We hope our dataset further inspires research on language, reasoning, and strategy in multi-agent interaction.
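The abstract does not include code, but the two-stage strategy-alignment scoring it describes could look roughly like the sketch below. The data fields, function name, and equal weighting of vote alignment and role-inference accuracy are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class SpeechItem:
    """One multiple-choice stance question from a social-ability dimension."""
    dimension: str        # e.g. one of the five social-ability dimensions
    model_choice: int     # option index picked by the LLM
    aligned_choice: int   # option matching the winning faction's strategy

@dataclass
class DecisionItem:
    """One decision point: a vote target and an opponent-role inference."""
    model_vote: str
    aligned_vote: str     # vote consistent with the winning faction's strategy
    model_role_guess: str
    true_role: str

def strategy_alignment_score(speech: list[SpeechItem],
                             decisions: list[DecisionItem]) -> dict:
    """Hypothetical scoring scheme against winning-faction strategies.

    Speech accuracy: fraction of stance questions answered in line with the
    winning faction. Decision accuracy: average of vote alignment and
    role-inference correctness. The 50/50 weighting is an assumption.
    """
    speech_acc = sum(s.model_choice == s.aligned_choice for s in speech) / max(len(speech), 1)
    vote_acc = sum(d.model_vote == d.aligned_vote for d in decisions) / max(len(decisions), 1)
    role_acc = sum(d.model_role_guess == d.true_role for d in decisions) / max(len(decisions), 1)
    decision_acc = 0.5 * (vote_acc + role_acc)
    return {
        "speech": speech_acc,
        "decision": decision_acc,
        "overall": 0.5 * (speech_acc + decision_acc),
    }
```

Under this reading, a model with an "overall" value below 0.50 agrees with the winning faction's speech stances and decisions less than half the time, which is how the reported gap in deception and counterfactual reasoning would show up in the score.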
Similar Papers
WOLF: Werewolf-based Observations for LLM Deception and Falsehoods
Multiagent Systems
Helps AI learn to lie and spot lies.
Verbal Werewolf: Engage Users with Verbalized Agentic Werewolf Game Framework
Computation and Language
AI plays talking games with you in real-time.
LLMs Judge Themselves: A Game-Theoretic Framework for Human-Aligned Evaluation
Computation and Language
Lets AI judge other AI's answers fairly.