Brains vs. Bytes: Evaluating LLM Proficiency in Olympiad Mathematics
By: Hamed Mahdavi, Alireza Hashemi, Majid Daliri, and more
Potential Business Impact:
Today's AI models still can't truly solve hard math problems.
Recent advances in large language models (LLMs) have shown impressive progress on mathematical reasoning tasks. However, current evaluation benchmarks predominantly focus on the accuracy of final answers, often overlooking the logical rigor that is crucial for mathematical problem solving. The claim that state-of-the-art LLMs can solve Math Olympiad-level problems therefore requires closer examination. To explore this, we conducted both qualitative and quantitative human evaluations of proofs generated by LLMs and developed a schema for automatically assessing their reasoning capabilities. Our study reveals that current LLMs fall significantly short of solving challenging Olympiad-level problems and frequently fail to distinguish correct mathematical reasoning from clearly flawed solutions. Our analyses demonstrate that the occasional correct final answers produced by LLMs often result from pattern recognition or heuristic shortcuts rather than genuine mathematical reasoning. These findings underscore the substantial gap between LLM performance and human expertise in advanced mathematical reasoning, and they highlight the need for benchmarks that prioritize the soundness of the reasoning used to arrive at an answer rather than the mere correctness of the final answer.
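As a rough illustration of what automated reasoning-soundness evaluation could look like (the paper's actual schema is not reproduced here), the sketch below assumes a hypothetical `judge` callable, for example a wrapper around an LLM prompted to grade proofs, and measures how often its verdicts agree with expert ground-truth labels. The sample data, function names, and toy judge are illustrative assumptions, not material from the paper.

```python
# Minimal sketch of a reasoning-soundness evaluation harness (illustrative only;
# not the authors' schema). A `judge` callable returns True if it deems a
# candidate proof logically sound; agreement with expert labels is measured.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ProofSample:
    problem: str           # Olympiad-style problem statement
    candidate_proof: str   # proof text generated by an LLM
    is_sound: bool         # expert (ground-truth) label for soundness


def agreement_rate(samples: List[ProofSample],
                   judge: Callable[[str, str], bool]) -> float:
    """Fraction of samples where the judge's verdict matches the expert label."""
    if not samples:
        return 0.0
    correct = sum(
        judge(s.problem, s.candidate_proof) == s.is_sound for s in samples
    )
    return correct / len(samples)


if __name__ == "__main__":
    # Hypothetical toy data; a real evaluation would use expert-annotated proofs.
    samples = [
        ProofSample("Show that the sum of two even integers is even.",
                    "Let a = 2m and b = 2n; then a + b = 2(m + n), which is even.",
                    True),
        ProofSample("Show that sqrt(2) is irrational.",
                    "sqrt(2) is about 1.414, whose digits never repeat, so it is irrational.",
                    False),
    ]

    # Placeholder judge that accepts every proof; a real judge would query an
    # LLM (or a human grader) and parse its verdict.
    naive_judge = lambda problem, proof: True

    print(f"Judge-expert agreement: {agreement_rate(samples, naive_judge):.2f}")
```

Running this prints an agreement of 0.50, since the naive judge accepts the flawed proof as well; the point of such a harness is exactly to expose judges (human or model) that cannot separate sound from flawed reasoning.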
Similar Papers
Thinking Machines: Mathematical Reasoning in the Age of LLMs
Artificial Intelligence
Helps computers prove math ideas like a scientist.
Beyond Final Answers: Evaluating Large Language Models for Math Tutoring
Human-Computer Interaction
Helps computers teach math, but they make mistakes.
CogMath: Assessing LLMs' Authentic Mathematical Ability from a Human Cognitive Perspective
Artificial Intelligence
Tests how well computers do math like people.