Characterizing Knowledge Graph Tasks in LLM Benchmarks Using Cognitive Complexity Frameworks

Published: September 17, 2025 | arXiv ID: 2509.19347v1

By: Sara Todorovikj, Lars-Peter Meyer, Michael Martin

Potential Business Impact:

Characterizes how cognitively demanding knowledge-graph tasks are for AI models, helping build benchmarks that better test difficult question types.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly used for tasks involving Knowledge Graphs (KGs), but their evaluation typically focuses on accuracy and output correctness. We propose a complementary task-characterization approach based on three complexity frameworks from cognitive psychology. Applying it to the LLM-KG-Bench framework, we highlight value distributions, identify underrepresented demands, and motivate richer interpretation and greater diversity in benchmark evaluation tasks.

Country of Origin
🇩🇪 Germany

Page Count
6 pages

Category
Computer Science:
Computation and Language