Cascaded Information Disclosure for Generalized Evaluation of Problem Solving Capabilities
By: Yunxiang Yan, Tomohiro Sawada, Kartik Goyal
Potential Business Impact:
Tests whether AI can reason, not just memorize answers.
While question-answering (QA) benchmark performance is an automatic and scalable way to compare LLMs, it is an indirect measure of their underlying problem-solving capabilities. We therefore propose a holistic and generalizable framework based on cascaded question disclosure that provides a more accurate estimate of models' problem-solving capabilities while maintaining scalability and automation. This approach collects model responses in a stagewise manner, with each stage revealing partial information about the question, designed to elicit generalized reasoning in LLMs. We find that our approach not only provides a better comparison between LLMs but also induces better intermediate traces in models than the standard QA paradigm. We empirically verify this behavior on diverse reasoning and knowledge-heavy QA datasets by comparing LLMs of varying sizes and families. Our approach narrows the performance gap observed in standard QA evaluation settings, indicating that the prevalent indirect QA paradigm overestimates the differences in performance between models. We further validate our findings through extensive ablation studies.
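The stagewise protocol described in the abstract could be sketched as follows; this is a minimal illustration under assumed interfaces, and `cascaded_disclosure`, `query_model`, and the clue format are hypothetical, not the authors' actual implementation.

```python
from typing import Callable, List

def cascaded_disclosure(stages: List[str],
                        query_model: Callable[[str], str]) -> List[str]:
    """Reveal the question stage by stage, collecting a model response
    after each partial disclosure (hypothetical sketch of the protocol)."""
    context = ""
    responses = []
    for stage in stages:
        context += stage + "\n"   # disclose the next piece of the question
        responses.append(query_model(context))  # intermediate trace per stage
    return responses

# Toy stand-in for an LLM: reports how many clues it has seen so far.
toy_model = lambda prompt: f"answer after {prompt.count(chr(10))} clue(s)"

print(cascaded_disclosure(
    ["Clue 1: ...", "Clue 2: ...", "Full question: ..."], toy_model))
# → ['answer after 1 clue(s)', 'answer after 2 clue(s)', 'answer after 3 clue(s)']
```

The per-stage responses give a trace of how the model's answer evolves as information accumulates, which is what lets this setup compare reasoning rather than only final-answer accuracy.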
Similar Papers
DisastQA: A Comprehensive Benchmark for Evaluating Question Answering in Disaster Management
Computation and Language
Helps computers answer questions during disasters.
DEEPQUESTION: Systematic Generation of Real-World Challenges for Evaluating LLMs Performance
Computation and Language
Tests AI's ability to think deeply, not just memorize.
DailyQA: A Benchmark to Evaluate Web Retrieval Augmented LLMs Based on Capturing Real-World Changes
Information Retrieval
Helps computers answer questions about recent news.