LEMMA: Learning from Errors for MatheMatical Advancement in LLMs

Published: March 21, 2025 | arXiv ID: 2503.17439v2

By: Zhuoshi Pan, Yu Li, Honglin Lin, and more

Potential Business Impact:

Teaches language models to learn from their own math mistakes and correct them.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) have demonstrated remarkable reasoning capability in solving mathematical problems. However, existing approaches primarily focus on improving the quality of correct training data, e.g., by distilling high-quality correct solutions from advanced models, while neglecting the value contained in error data; this can hinder the model's reflective ability. Though some studies attempt to leverage error data, they often involve complex mechanisms, such as Monte Carlo Tree Search (MCTS), to explore error nodes. In this work, we propose to enhance LLMs' reasoning ability through Learning from Errors for Mathematical Advancement (LEMMA). LEMMA constructs fine-tuning data in which an incorrect solution containing an erroneous step is connected, through a reflection, to a correct solution. Specifically, we systematically analyze model-generated error types and introduce an error-type grounded mistake augmentation method to collect diverse and representative errors. Correct solutions are obtained either by fixing the error or by generating a fresh solution from scratch. A model-aware, smooth reflection connection then transitions from the erroneous solution to the correct one. By fine-tuning on the constructed dataset, the model learns to correct errors autonomously during generation, without relying on external critique models. Experimental results demonstrate that LEMMA achieves significant performance improvements over other strong baselines.
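The data construction the abstract describes can be made concrete with a small sketch. Below is a hypothetical Python outline (not the authors' released code) of how a single LEMMA-style training example might be assembled: the erroneous trajectory is kept up to the wrong step, a reflection transition names the mistake, and the correct solution follows, via either a fix-and-continue or a fresh-restart template. All function names, templates, and the data layout here are illustrative assumptions.

```python
# A minimal sketch, assuming a LEMMA-style pipeline: join an erroneous
# solution to a correct one through a short reflection transition, then
# emit a (prompt, completion) pair for supervised fine-tuning.
# Templates and field names are assumptions, not the paper's code.

from dataclasses import dataclass

REFLECTION_TEMPLATES = {
    # "Fix & Continue": point out the wrong step, then resume from it.
    "fix_and_continue": "Wait, step {step} is wrong: {diagnosis} Let me redo it.",
    # "Fresh Restart": discard the attempt and solve again from scratch.
    "fresh_restart": "Wait, this approach is flawed: {diagnosis} Let me start over.",
}

@dataclass
class Attempt:
    steps: list[str]   # solution steps, in order
    error_index: int   # index of the first erroneous step
    diagnosis: str     # short description of what went wrong

def build_lemma_example(question: str,
                        bad: Attempt,
                        correct_steps: list[str],
                        strategy: str = "fix_and_continue") -> dict:
    """Connect an erroneous prefix to a correct solution via a reflection."""
    # Keep the erroneous trajectory up to and including the wrong step,
    # so the model sees the mistake before the reflection acknowledges it.
    prefix = bad.steps[: bad.error_index + 1]
    reflection = REFLECTION_TEMPLATES[strategy].format(
        step=bad.error_index + 1, diagnosis=bad.diagnosis
    )
    completion = "\n".join(prefix + [reflection] + correct_steps)
    return {"prompt": question, "completion": completion}

if __name__ == "__main__":
    bad = Attempt(
        steps=["Let x be the number of apples.",
               "Then 3x + 2 = 11, so x = 4."],   # arithmetic slip
        error_index=1,
        diagnosis="3x = 9, so x = 3, not 4.",
    )
    example = build_lemma_example(
        "Three times a number of apples plus 2 equals 11. How many apples?",
        bad,
        correct_steps=["Then 3x + 2 = 11, so 3x = 9 and x = 3.",
                       "The answer is 3."],
    )
    print(example["completion"])
```

Fine-tuning on such pairs would expose the model to the mistake, the reflection, and the recovery in one sequence, which is the mechanism the abstract credits for autonomous self-correction at generation time.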

Page Count
25 pages

Category
Computer Science:
Machine Learning (CS)