CogMath: Assessing LLMs' Authentic Mathematical Ability from a Human Cognitive Perspective

Published: June 4, 2025 | arXiv ID: 2506.04481v1

By: Jiayu Liu, Zhenya Huang, Wei Dai, and more

Potential Business Impact:

Tests how well computers do math like people.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Although large language models (LLMs) show promise in solving complex mathematical tasks, existing evaluation paradigms rely solely on a coarse measure of overall answer accuracy, which is insufficient for assessing their authentic capabilities. In this paper, we propose CogMath, which comprehensively assesses LLMs' mathematical abilities through the lens of human cognition. Specifically, inspired by psychological theories, CogMath formalizes the human reasoning process into three stages: problem comprehension, problem solving, and solution summarization. Within these stages, we investigate perspectives such as numerical calculation, knowledge, and counterfactuals, and design a total of nine fine-grained evaluation dimensions. For each dimension, we develop an "Inquiry-Judge-Reference" multi-agent system that generates inquiries to assess an LLM's mastery along that dimension. An LLM is considered to have truly mastered a problem only when it excels on the inquiries from all nine dimensions. By applying CogMath to three benchmarks, we reveal that the mathematical capabilities of seven mainstream LLMs are overestimated by 30%-40%. Moreover, we locate their strengths and weaknesses across specific stages and dimensions, offering in-depth insights to further enhance their reasoning abilities.
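To make the evaluation loop concrete, below is a minimal Python sketch of how an "Inquiry-Judge-Reference" pipeline and the all-nine-dimensions mastery criterion could be wired together, based only on the abstract. The agent interfaces (make_inquiry, judge, solve) and the dimension names are illustrative assumptions, not the paper's actual implementation or taxonomy.

```python
# Hypothetical sketch of CogMath-style evaluation (assumed interfaces, not the authors' code).
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative dimensions: 3 stages x 3 probes each (the paper defines its own 9).
DIMENSIONS: List[str] = [
    "comprehension/rephrasing", "comprehension/missing-info", "comprehension/irrelevant-info",
    "solving/numerical-calculation", "solving/knowledge", "solving/counterfactual",
    "summarization/answer-check", "summarization/step-check", "summarization/generalization",
]

@dataclass
class Inquiry:
    dimension: str
    question: str   # probe produced by the Inquiry agent
    reference: str  # ground-truth answer produced by the Reference agent

def evaluate_problem(
    problem: str,
    solve: Callable[[str], str],                  # the LLM under evaluation
    make_inquiry: Callable[[str, str], Inquiry],  # Inquiry + Reference agents (assumed interface)
    judge: Callable[[str, str], bool],            # Judge agent: does the response match the reference?
) -> Dict[str, bool]:
    """Return a pass/fail result for each evaluation dimension on one problem."""
    results: Dict[str, bool] = {}
    for dim in DIMENSIONS:
        inquiry = make_inquiry(problem, dim)
        response = solve(inquiry.question)
        results[dim] = judge(response, inquiry.reference)
    return results

def has_mastered(results: Dict[str, bool]) -> bool:
    # The abstract's criterion: mastery only when excelling in all nine dimensions.
    return all(results.values())
```

Under this stricter criterion, per-problem "mastery" rates can fall well below plain answer accuracy, which is how the paper arrives at its 30%-40% overestimation finding.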

Country of Origin
🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Artificial Intelligence