LLMs cannot spot math errors, even when allowed to peek into the solution
By: KV Aditya Srivatsa, Kaushal Kumar Maurya, Ekaterina Kochmar
Potential Business Impact:
Helps computers find mistakes in students' math solutions.
Large language models (LLMs) demonstrate remarkable performance on math word problems, yet they have been shown to struggle with meta-reasoning tasks such as identifying errors in student solutions. In this work, we investigate the challenge of locating the first error step in stepwise solutions using two error reasoning datasets: VtG and PRM800K. Our experiments show that state-of-the-art LLMs struggle to locate the first error step in student solutions even when given access to the reference solution. To address this, we propose an approach that generates an intermediate corrected solution aligned more closely with the student's original solution, which improves error-localization performance.
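The core idea of the proposed approach can be illustrated with a minimal sketch: once a corrected solution has been regenerated to follow the student's own steps, locating the first error reduces to finding the first divergent step. The function names and the normalized exact-match comparison below are hypothetical stand-ins (the paper uses LLM-based judgments, not string matching):

```python
def normalize(step: str) -> str:
    """Lowercase and collapse whitespace so trivial formatting
    differences are not flagged as errors."""
    return " ".join(step.lower().split())

def first_error_step(student_steps, corrected_steps):
    """Return the 1-based index of the first student step that diverges
    from the aligned corrected solution, or None if none diverges.

    Assumes corrected_steps was generated to align step-by-step with
    student_steps, as in the paper's intermediate-correction approach.
    """
    for i, (s, c) in enumerate(zip(student_steps, corrected_steps), start=1):
        if normalize(s) != normalize(c):
            return i
    # Extra or missing trailing steps also mark the first error point.
    if len(student_steps) != len(corrected_steps):
        return min(len(student_steps), len(corrected_steps)) + 1
    return None

student = [
    "Let x be the number of apples.",
    "Then 3x + 2 = 11, so 3x = 13.",   # arithmetic slip: should be 3x = 9
    "Therefore x = 13/3.",
]
corrected = [
    "Let x be the number of apples.",
    "Then 3x + 2 = 11, so 3x = 9.",
    "Therefore x = 3.",
]
print(first_error_step(student, corrected))  # -> 2
```

In practice the step-equivalence check would itself be an LLM call; the point of the alignment step is that a corrected solution written in the student's own terms makes this comparison far more reliable than comparing against an arbitrary reference solution.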
Similar Papers
Mathematical Computation and Reasoning Errors by Large Language Models
Artificial Intelligence
AI learns math better, helps students learn.
LEMMA: Learning from Errors for MatheMatical Advancement in LLMs
Machine Learning (CS)
Teaches computers to learn from math mistakes.