EduEval: A Hierarchical Cognitive Benchmark for Evaluating Large Language Models in Chinese Education
By: Guoqing Ma, Jia Zhu, Hanghui Guo, and more
Potential Business Impact:
Tests AI on schoolwork and pinpoints its strengths and weaknesses.
Large language models (LLMs) demonstrate significant potential for educational applications. However, their unscrutinized deployment poses risks to educational standards, underscoring the need for rigorous evaluation. We introduce EduEval, a comprehensive hierarchical benchmark for evaluating LLMs in Chinese K-12 education. The benchmark makes three key contributions: (1) Cognitive Framework: we propose the EduAbility Taxonomy, which unifies Bloom's Taxonomy and Webb's Depth of Knowledge to organize tasks across six cognitive dimensions (Memorization, Understanding, Application, Reasoning, Creativity, and Ethics); (2) Authenticity: the benchmark integrates real exam questions, classroom conversations, student essays, and expert-designed prompts to reflect genuine educational challenges; (3) Scale: EduEval comprises 24 distinct task types with over 11,000 questions spanning primary through high school levels. We evaluate 14 leading LLMs under both zero-shot and few-shot settings, finding that while models perform well on factual tasks, they struggle with classroom dialogue classification and produce inconsistent results in creative content generation. Notably, several open-source models outperform proprietary systems on complex educational reasoning. Few-shot prompting varies in effectiveness across cognitive dimensions, suggesting that different educational objectives require tailored prompting strategies. These findings provide targeted benchmarking metrics for developing LLMs optimized for diverse Chinese educational tasks.
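To make the zero-shot versus few-shot comparison concrete, here is a minimal sketch of how prompts for such a benchmark might be assembled. The item schema, dimension labels, and the build_prompt helper are hypothetical illustrations, not EduEval's actual harness or data format:

    # Hypothetical sketch: zero-shot vs. few-shot prompt construction
    # for a multiple-choice benchmark item. Field names are illustrative,
    # not EduEval's actual schema.
    from dataclasses import dataclass
    from typing import List, Sequence

    @dataclass
    class EduItem:
        dimension: str       # e.g. "Memorization", "Reasoning", "Ethics"
        question: str
        choices: List[str]   # answer options, mapped to labels A-D
        answer: str          # gold label, e.g. "B"

    def format_item(item: EduItem, reveal_answer: bool) -> str:
        # Render one question; exemplars show their gold answer,
        # the target item leaves it blank for the model to fill in.
        lines = [item.question]
        lines += [f"{label}. {text}" for label, text in zip("ABCD", item.choices)]
        lines.append(f"Answer: {item.answer}" if reveal_answer else "Answer:")
        return "\n".join(lines)

    def build_prompt(item: EduItem, exemplars: Sequence[EduItem] = ()) -> str:
        # Zero-shot when exemplars is empty; few-shot otherwise.
        parts = [format_item(ex, reveal_answer=True) for ex in exemplars]
        parts.append(format_item(item, reveal_answer=False))
        return "\n\n".join(parts)

Under this setup, per-dimension accuracy would come from grouping model predictions by item.dimension, which is the kind of breakdown behind the abstract's observation that few-shot gains vary across cognitive dimensions.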
Similar Papers
OmniEduBench: A Comprehensive Chinese Benchmark for Evaluating Large Language Models in Education
Computation and Language
Tests how well AI learns and thinks like students.
CPG-EVAL: A Multi-Tiered Benchmark for Evaluating the Chinese Pedagogical Grammar Competence of Large Language Models
Computation and Language
Tests AI's grammar skills for teaching languages.
AECBench: A Hierarchical Benchmark for Knowledge Evaluation of Large Language Models in the AEC Field
Computation and Language
Tests if AI can safely design buildings.