Beyond Benchmark: LLMs Evaluation with an Anthropomorphic and Value-oriented Roadmap
By: Jun Wang, Ninglun Gu, Kailai Zhang, and more
Potential Business Impact:
Tests AI like a person for real-world use.
For Large Language Models (LLMs), a disconnect persists between benchmark performance and real-world utility. Current evaluation frameworks remain fragmented, prioritizing technical metrics while neglecting holistic assessment for deployment. This survey introduces an anthropomorphic evaluation paradigm through the lens of human intelligence, proposing a novel three-dimensional taxonomy: Intelligence Quotient (IQ)-General Intelligence for foundational capacity, Emotional Quotient (EQ)-Alignment Ability for value-based interactions, and Professional Quotient (PQ)-Professional Expertise for specialized proficiency. For practical value, we pioneer a Value-oriented Evaluation (VQ) framework assessing economic viability, social impact, ethical alignment, and environmental sustainability. Our modular architecture integrates six components with an implementation roadmap. Through analysis of 200+ benchmarks, we identify key challenges, including the need for dynamic assessment and gaps in interpretability. The survey provides actionable guidance for developing LLMs that are technically proficient, contextually relevant, and ethically sound. We maintain a curated repository of open-source evaluation resources at: https://github.com/onejune2018/Awesome-LLM-Eval.
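To make the taxonomy concrete, here is a minimal sketch of how benchmark scores could be organized along the IQ/EQ/PQ dimensions plus the value-oriented (VQ) axis described in the abstract. The class names (`Dimension`, `BenchmarkResult`, `EvaluationProfile`), the example benchmark-to-dimension assignments, and the simple mean aggregation are illustrative assumptions, not the paper's actual framework or implementation.

```python
from dataclasses import dataclass, field
from enum import Enum
from statistics import mean


class Dimension(Enum):
    """The survey's three anthropomorphic dimensions plus its value-oriented axis."""
    IQ = "general_intelligence"      # foundational capacity
    EQ = "alignment_ability"         # value-based interactions
    PQ = "professional_expertise"    # specialized proficiency
    VQ = "value_oriented"            # economic / social / ethical / environmental


@dataclass
class BenchmarkResult:
    """A single benchmark score tagged with the dimension it is assumed to probe."""
    name: str
    dimension: Dimension
    score: float  # normalized to [0, 1]


@dataclass
class EvaluationProfile:
    """Hypothetical helper that aggregates per-dimension scores for one model."""
    results: list[BenchmarkResult] = field(default_factory=list)

    def dimension_score(self, dim: Dimension) -> float:
        # Average all benchmark scores mapped to this dimension.
        scores = [r.score for r in self.results if r.dimension is dim]
        return mean(scores) if scores else float("nan")

    def summary(self) -> dict[str, float]:
        return {d.name: self.dimension_score(d) for d in Dimension}


if __name__ == "__main__":
    # Example dimension assignments are illustrative, not taken from the survey.
    profile = EvaluationProfile([
        BenchmarkResult("MMLU", Dimension.IQ, 0.82),
        BenchmarkResult("TruthfulQA", Dimension.EQ, 0.61),
        BenchmarkResult("MedQA", Dimension.PQ, 0.74),
        BenchmarkResult("inference-cost-audit", Dimension.VQ, 0.55),
    ])
    print(profile.summary())
```

One design note: keeping the VQ axis as just another `Dimension` makes it easy to report economic, social, ethical, and environmental checks alongside capability scores in a single profile, which mirrors the survey's aim of pairing technical metrics with deployment-oriented value assessment.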
Similar Papers
HeartBench: Probing Core Dimensions of Anthropomorphic Intelligence in LLMs
Computation and Language
Tests AI's feelings and ethics in Chinese.
Large Language Model Psychometrics: A Systematic Review of Evaluation, Validation, and Enhancement
Computation and Language
Tests AI like people's minds.
LLM-Crowdsourced: A Benchmark-Free Paradigm for Mutual Evaluation of Large Language Models
Artificial Intelligence
Tests AI better by having AI ask and answer.