A Survey on Large Language Models for Mathematical Reasoning
By: Peng-Yuan Wang, Tian-Shuo Liu, Chenyang Wang, et al.
Potential Business Impact:
Helps computers solve math problems like a person.
Mathematical reasoning has long been one of the most fundamental and challenging frontiers in artificial intelligence research. In recent years, large language models (LLMs) have achieved significant advances in this area. This survey examines the development of mathematical reasoning abilities in LLMs through two high-level cognitive phases: comprehension, where models gain mathematical understanding via diverse pretraining strategies, and answer generation, which has progressed from direct answer prediction to step-by-step Chain-of-Thought (CoT) reasoning. We review methods for enhancing mathematical reasoning, ranging from training-free prompting to fine-tuning approaches such as supervised fine-tuning and reinforcement learning, and discuss recent work on extended CoT and "test-time scaling". Despite notable progress, fundamental challenges remain in capacity, efficiency, and generalization. To address these issues, we highlight promising research directions, including advanced pretraining and knowledge augmentation techniques, formal reasoning frameworks, and meta-generalization through principled learning paradigms. This survey aims to provide insights for researchers interested in enhancing the reasoning capabilities of LLMs and for those seeking to apply these techniques to other domains.
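To make the distinction the abstract draws concrete, the following minimal sketch contrasts direct answer prediction with step-by-step Chain-of-Thought prompting. The prompt templates here are common illustrative patterns, not templates from the survey, and the actual model call is omitted.

```python
# Minimal sketch contrasting direct-answer prompting with Chain-of-Thought
# (CoT) prompting. The templates below are illustrative assumptions; the
# LLM invocation itself is left out.

def direct_prompt(question: str) -> str:
    """Ask the model for the final answer alone."""
    return f"Q: {question}\nA:"

def cot_prompt(question: str) -> str:
    """Elicit intermediate reasoning steps before the final answer."""
    return f"Q: {question}\nA: Let's think step by step."

if __name__ == "__main__":
    question = "If a train travels 60 km in 1.5 hours, what is its average speed?"
    print(direct_prompt(question))
    print(cot_prompt(question))
```

The only difference between the two styles is the trailing instruction, yet it changes the generation target from a bare answer to an explicit reasoning trace, which the survey identifies as a key driver of mathematical-reasoning gains.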
Similar Papers
A Survey on Mathematical Reasoning and Optimization with Large Language Models
Artificial Intelligence
AI learns to solve math problems better.
Advancing Reasoning in Large Language Models: Promising Methods and Approaches
Computation and Language
Makes computers think better and solve harder problems.
Logical Reasoning in Large Language Models: A Survey
Artificial Intelligence
Makes AI better at solving puzzles and thinking logically.