Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics

Published: April 1, 2025 | arXiv ID: 2504.01995v2

By: Hamed Mahdavi, Alireza Hashemi, Majid Daliri, and more

Potential Business Impact:

Current LLMs cannot yet reliably solve hard (Olympiad-level) math problems with sound reasoning; their occasional correct answers often come from pattern matching, so rigorous mathematical work still requires human verification.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Recent advances in large language models (LLMs) have shown impressive progress in mathematical reasoning tasks. However, current evaluation benchmarks predominantly focus on the accuracy of final answers, often overlooking the logical rigor crucial for mathematical problem solving. The claim that state-of-the-art LLMs can solve Math Olympiad-level problems requires closer examination. To explore this, we conducted both qualitative and quantitative human evaluations of proofs generated by LLMs and developed a schema for automatically assessing their reasoning capabilities. Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. Our analyses demonstrate that the occasional correct final answers provided by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise in advanced mathematical reasoning and highlight the importance of developing benchmarks that prioritize the soundness of the reasoning used to arrive at an answer rather than the mere correctness of the final answer.
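
To make the idea of grading reasoning rather than final answers concrete, below is a minimal, hypothetical sketch of an automated proof-grading rubric. It is not the paper's actual schema: the criteria, weights, and the `judge` callable are illustrative assumptions, where `judge` could wrap a human grader or an LLM-as-judge prompt.

```python
# Hypothetical sketch: score a candidate proof against a weighted rubric.
# The rubric and the judge interface are assumptions for illustration only.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Criterion:
    name: str      # what aspect of the reasoning is being graded
    weight: float  # relative importance in the final score (weights sum to 1)


RUBRIC: List[Criterion] = [
    Criterion("uses valid logical steps with no unjustified leaps", 0.5),
    Criterion("handles all cases / no missing subcases", 0.3),
    Criterion("final answer actually follows from the argument", 0.2),
]


def grade_proof(problem: str, proof: str,
                judge: Callable[[str], float]) -> float:
    """Return a reasoning-soundness score in [0, 1] for a candidate proof.

    `judge` maps a grading prompt to a score in [0, 1]; in practice it could
    be a human grader or an LLM-as-judge call.
    """
    total = 0.0
    for c in RUBRIC:
        prompt = (
            f"Problem:\n{problem}\n\n"
            f"Candidate proof:\n{proof}\n\n"
            f"On a scale of 0 to 1, how well does the proof satisfy: {c.name}?"
        )
        total += c.weight * judge(prompt)
    return total


if __name__ == "__main__":
    # Stub judge so the sketch runs standalone; replace with a real grader.
    dummy_judge = lambda prompt: 0.5
    score = grade_proof("Show that sqrt(2) is irrational.",
                        "Assume sqrt(2) = p/q in lowest terms ...",
                        dummy_judge)
    print(f"reasoning score: {score:.2f}")
```

The point of such a scheme, in line with the paper's argument, is that a solution with a correct final answer but unjustified steps would still receive a low score.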

Country of Origin
🇮🇷 🇺🇸 🇮🇹 Iran, United States, Italy

Page Count
33 pages

Category
Computer Science: Artificial Intelligence