HeartBench: Probing Core Dimensions of Anthropomorphic Intelligence in LLMs
By: Jiaxin Liu, Peiyi Tu, Wenyu Chen, and more
Potential Business Impact:
Tests how well AI handles emotions and ethics in Chinese.
While Large Language Models (LLMs) have achieved remarkable success on cognitive and reasoning benchmarks, they exhibit a persistent deficit in anthropomorphic intelligence: the capacity to navigate complex social, emotional, and ethical nuances. This gap is particularly acute in the Chinese linguistic and cultural context, where the lack of specialized evaluation frameworks and high-quality socio-emotional data impedes progress. To address these limitations, we present HeartBench, a framework designed to evaluate the integrated emotional, cultural, and ethical dimensions of Chinese LLMs. Grounded in authentic psychological counseling scenarios and developed in collaboration with clinical experts, the benchmark is structured around a theory-driven taxonomy comprising five primary dimensions and fifteen secondary capabilities. We implement a case-specific, rubric-based methodology that translates abstract human-like traits into granular, measurable criteria through a "reasoning-before-scoring" evaluation protocol. Our assessment of 13 state-of-the-art LLMs indicates a substantial performance ceiling: even leading models achieve only 60% of the expert-defined ideal score. Furthermore, analysis on a difficulty-stratified "Hard Set" reveals significant performance decay in scenarios involving subtle emotional subtexts and complex ethical trade-offs. HeartBench establishes a standardized metric for anthropomorphic AI evaluation and provides a methodological blueprint for constructing high-quality, human-aligned training data.
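To make the "reasoning-before-scoring" protocol concrete, below is a minimal Python sketch of how a case-specific, rubric-based scorer of this kind could be wired up. It assumes a per-case rubric of expert-written criteria and a judge that must articulate its reasoning before awarding points; all names (Criterion, evaluate_case, stub_judge) and the prompt wording are hypothetical illustrations, not the paper's actual schema or prompts.

```python
# Sketch of a rubric-based "reasoning-before-scoring" evaluation loop.
# Hypothetical structure; HeartBench's real rubric format and judge
# prompts are not described in this listing.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    name: str          # e.g. "acknowledges emotional subtext"
    description: str   # granular, case-specific wording from experts
    max_points: int    # weight of this criterion within the case

def evaluate_case(response: str, rubric: list[Criterion],
                  judge_fn: Callable[[str], tuple[str, int]]) -> float:
    """Score one model response against its case-specific rubric.

    judge_fn takes a prompt and returns (reasoning, points); asking
    for the reasoning first mirrors the reasoning-before-scoring idea.
    """
    earned = 0
    for c in rubric:
        prompt = (
            f"Criterion: {c.name}\n{c.description}\n"
            f"Response under evaluation:\n{response}\n"
            f"First explain your reasoning, then award 0-{c.max_points} points."
        )
        reasoning, points = judge_fn(prompt)
        earned += max(0, min(points, c.max_points))  # clamp to valid range
    ideal = sum(c.max_points for c in rubric)        # expert-defined ceiling
    return earned / ideal                            # fraction of ideal score

# Stub judge for illustration only; in practice this would call an LLM.
def stub_judge(prompt: str) -> tuple[str, int]:
    return ("(judge reasoning here)", 1)

rubric = [Criterion("empathy", "Names the user's feeling before advising.", 2),
          Criterion("ethics", "Flags the confidentiality trade-off.", 3)]
print(evaluate_case("I hear how torn you feel...", rubric, stub_judge))
```

Normalizing against the summed criterion weights is one plausible way to express results as a fraction of the "expert-defined ideal score," which would make the reported 60% ceiling directly interpretable.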
Similar Papers
Beyond Benchmark: LLMs Evaluation with an Anthropomorphic and Value-oriented Roadmap
Artificial Intelligence
Tests AI like a person for real-world use.
LLM Ethics Benchmark: A Three-Dimensional Assessment System for Evaluating Moral Reasoning in Large Language Models
Computers and Society
Tests if AI makes good and fair choices.
AIPsychoBench: Understanding the Psychometric Differences between LLMs and Humans
Computation and Language
Measures AI's psychological traits across many languages.