DEEPQUESTION: Systematic Generation of Real-World Challenges for Evaluating LLMs Performance
By: Ali Khoramfar, Ali Ramezani, Mohammad Mahdi Mohajeri, and more
Potential Business Impact:
Tests AI's ability to think deeply, not just memorize.
LLMs often excel on standard benchmarks but falter on real-world tasks. We introduce DeepQuestion, a scalable automated framework that augments existing datasets based on Bloom's taxonomy and creates novel questions that trace original solution paths to probe evaluative and creative skills. Extensive experiments across ten open-source and proprietary models, covering both general-purpose and reasoning LLMs, reveal substantial performance drops (up to a 70% loss in accuracy) on higher-order tasks, underscoring persistent gaps in deep reasoning. Our work highlights the need for cognitively diverse benchmarks to advance LLM progress. DeepQuestion and related datasets will be released upon acceptance of the paper.
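To make the augmentation idea concrete, below is a minimal sketch of how one might rewrite an existing benchmark item into higher-order Bloom's taxonomy variants with an LLM prompt. This is not the authors' released DeepQuestion code: the prompt wording, the chosen taxonomy levels, and the `call_llm` stub are all assumptions for illustration.

```python
# Minimal sketch (not the authors' released code): rewriting a benchmark item
# into higher-order Bloom's taxonomy variants via an LLM prompt.
# The prompt text, level names, and call_llm() stub are illustrative assumptions.

BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]

PROMPT_TEMPLATE = (
    "Original question: {question}\n"
    "Reference solution: {solution}\n\n"
    "Rewrite this as a new question at the '{level}' level of Bloom's taxonomy. "
    "The new question should build on the original solution path but require "
    "{level}-level reasoning rather than recall."
)


def call_llm(prompt: str) -> str:
    """Placeholder for an actual model call (API client or local model)."""
    raise NotImplementedError("Plug in your own LLM client here.")


def augment_item(question: str, solution: str,
                 levels=("evaluate", "create")) -> dict:
    """Generate higher-order variants of a single benchmark item."""
    variants = {}
    for level in levels:
        prompt = PROMPT_TEMPLATE.format(
            question=question, solution=solution, level=level
        )
        variants[level] = call_llm(prompt)
    return variants
```

In this reading, each augmented item stays anchored to the original solution so that graders can still check correctness, while the higher taxonomy levels ("evaluate", "create") are the ones the abstract identifies as exposing the largest performance drops.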
Similar Papers
What Has Been Lost with Synthetic Evaluation?
Computation and Language
Computers can create tests for other computers.
Benchmarking Critical Questions Generation: A Challenging Reasoning Task for Large Language Models
Computation and Language
Helps computers ask smart questions to check ideas.
Toward Generalizable Evaluation in the LLM Era: A Survey Beyond Benchmarks
Computation and Language
Surveys ways to test AI better as it gets smarter.