HUMANLLM: Benchmarking and Reinforcing LLM Anthropomorphism via Human Cognitive Patterns
By: Xintao Wang, Jian Yang, Weiyuan Li, and more
Large Language Models (LLMs) have demonstrated remarkable capabilities in reasoning and generation, serving as the foundation for advanced persona simulation and Role-Playing Language Agents (RPLAs). However, authentic alignment with human cognitive and behavioral patterns remains a critical challenge for these agents. We present HUMANLLM, a framework that treats psychological patterns as interacting causal forces. We construct 244 patterns from ~12,000 academic papers and synthesize 11,359 scenarios in which 2-5 patterns reinforce, conflict with, or modulate one another, expressed as multi-turn conversations that surface inner thoughts, actions, and dialogue. Our dual-level checklists evaluate both individual pattern fidelity and emergent multi-pattern dynamics, achieving strong human alignment (r = 0.91) while revealing that holistic metrics conflate simulation accuracy with social desirability. HUMANLLM-8B outperforms Qwen3-32B on multi-pattern dynamics despite having 4x fewer parameters, demonstrating that authentic anthropomorphism requires cognitive modeling: simulating not just what humans do, but the psychological processes that generate those behaviors.
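To make the dual-level evaluation concrete, here is a minimal sketch of how checklist-based scoring and human-alignment correlation could work. The item wording, the binary pass/fail judgments, the mean aggregation, and all names below are illustrative assumptions, not the paper's actual scoring rules.

```python
# Hypothetical sketch of dual-level checklist scoring (not HUMANLLM's
# actual implementation). Each item targets either single-pattern
# fidelity ("pattern") or multi-pattern interaction ("dynamics").
from dataclasses import dataclass
from statistics import mean

@dataclass
class ChecklistItem:
    question: str   # e.g. "Does the inner monologue reflect loss aversion?"
    level: str      # "pattern" or "dynamics"
    passed: bool    # binary judgment from an LLM judge or human annotator

def score_simulation(items: list[ChecklistItem]) -> dict[str, float]:
    """Aggregate binary checklist judgments into per-level scores."""
    by_level: dict[str, list[float]] = {"pattern": [], "dynamics": []}
    for item in items:
        by_level[item.level].append(1.0 if item.passed else 0.0)
    return {level: (mean(vals) if vals else 0.0)
            for level, vals in by_level.items()}

def pearson_r(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between checklist scores and human ratings,
    the kind of statistic behind an alignment figure like r = 0.91."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Scoring each level separately is what lets a benchmark like this distinguish a model that reproduces individual patterns from one that also captures their emergent interactions, rather than collapsing both into a single holistic rating.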
Similar Papers
Humanizing Machines: Rethinking LLM Anthropomorphism Through a Multi-Level Framework of Design
Computation and Language
Makes AI seem more human to help us use it.
Large Language Models Show Signs of Alignment with Human Neurocognition During Abstract Reasoning
Neurons and Cognition
Computers learn to think like humans.