DeepQuestion: Systematic Generation of Real-World Challenges for Evaluating LLM Performance

Published: May 30, 2025 | arXiv ID: 2505.24532v1

By: Ali Khoramfar, Ali Ramezani, Mohammad Mahdi Mohajeri and more

Potential Business Impact:

Tests AI's ability to think deeply, not just memorize.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

LLMs often excel on standard benchmarks but falter on real-world tasks. We introduce DeepQuestion, a scalable automated framework that augments existing datasets based on Bloom's taxonomy and creates novel questions that trace original solution paths to probe evaluative and creative skills. Extensive experiments across ten open-source and proprietary models, covering both general-purpose and reasoning LLMs, reveal substantial performance drops (even up to 70% accuracy loss) on higher-order tasks, underscoring persistent gaps in deep reasoning. Our work highlights the need for cognitively diverse benchmarks to advance LLM progress. DeepQuestion and related datasets will be released upon acceptance of the paper.
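The headline result is a relative accuracy loss of up to 70% when models face higher-order (evaluate/create-level) variants of questions they answer correctly in their original form. As a minimal sketch of that metric only (the paper's actual pipeline and prompts are not yet released, and all names and data below are hypothetical):

```python
# Hypothetical sketch: DeepQuestion's evaluation code is not public.
# This only illustrates the reported metric -- relative accuracy drop on
# Bloom's-taxonomy-augmented questions versus the originals -- on toy data.

from typing import List

def accuracy(answers: List[str], gold: List[str]) -> float:
    """Fraction of model answers matching the gold labels."""
    return sum(a == g for a, g in zip(answers, gold)) / len(gold)

def relative_drop(base_acc: float, augmented_acc: float) -> float:
    """Relative accuracy loss; 0.70 would correspond to the 70% figure."""
    return (base_acc - augmented_acc) / base_acc

# Toy example: a model answers 9/10 original questions correctly,
# but only 3/10 of the higher-order augmented variants.
base = accuracy(["A"] * 9 + ["B"], ["A"] * 10)           # 0.9
augmented = accuracy(["A"] * 3 + ["B"] * 7, ["A"] * 10)  # 0.3
print(f"relative drop: {relative_drop(base, augmented):.0%}")
```

Reporting the drop relative to the base accuracy (rather than as an absolute difference) keeps the comparison fair across models with different starting accuracies.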

Country of Origin
🇮🇷 Iran

Page Count
11 pages

Category
Computer Science:
Computation and Language