Cascaded Information Disclosure for Generalized Evaluation of Problem Solving Capabilities

Published: July 31, 2025 | arXiv ID: 2507.23776v1

By: Yunxiang Yan, Tomohiro Sawada, Kartik Goyal

Potential Business Impact:

Tests whether AI models actually reason, rather than just memorize answers.

Business Areas:
Q&A Community and Lifestyle

While question-answering (QA) benchmark performance is an automatic and scalable way to compare LLMs, it measures their underlying problem-solving capabilities only indirectly. We therefore propose a holistic, generalizable framework based on "cascaded question disclosure" that yields a more accurate estimate of models' problem-solving capabilities while preserving scalability and automation. The approach collects model responses in stages, with each stage revealing partial information about the question, designed to elicit generalized reasoning in LLMs. We find that this not only enables a better comparison between LLMs but also induces better intermediate reasoning traces than the standard QA paradigm. We verify this behavior empirically on diverse reasoning-oriented and knowledge-heavy QA datasets, comparing LLMs of varying sizes and families. Our approach narrows the performance gaps observed under standard QA evaluation, indicating that the prevalent indirect QA paradigm overestimates the differences between models. We further validate these findings with extensive ablation studies.
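The stagewise protocol described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: the function names (`cascaded_eval`, `toy_model`) and the exact-match scoring are assumptions made for the example.

```python
# Hedged sketch of cascaded question disclosure: a question is split into
# ordered stages, the model answers after each cumulative partial reveal,
# and we record the earliest stage at which its answer is correct.

def cascaded_eval(stages, answer, model):
    """Reveal `stages` cumulatively; return the 1-based index of the first
    stage at which `model` produces `answer`, or None if it never does."""
    context = []
    for i, stage in enumerate(stages, start=1):
        context.append(stage)
        response = model(" ".join(context))
        if response == answer:
            return i
    return None

# Toy stand-in for an LLM: answers correctly once a keyword is revealed.
def toy_model(prompt):
    return "Paris" if "capital" in prompt else "unknown"

stages = ["A European city.", "It is the capital of France."]
print(cascaded_eval(stages, "Paris", toy_model))  # -> 2
```

A model that reasons well needs fewer revealed stages to reach the correct answer, so the stage index serves as a finer-grained score than the single pass/fail of standard QA.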

Country of Origin
🇺🇸 United States

Page Count
28 pages

Category
Computer Science:
Computation and Language