More Data or Better Data? A Critical Analysis of Data Selection and Synthesis for Mathematical Reasoning
By: Yike Zhao, Simin Guo, Ziqing Yang, and others
Potential Business Impact:
Improves AI math skills with better, not just more, data.
The reasoning capabilities of Large Language Models (LLMs) play a critical role in many downstream tasks, yet they depend strongly on the quality of training data. Despite the variety of proposed data construction methods, their practical utility in real-world pipelines remains underexplored. In this work, we conduct a comprehensive analysis of open-source datasets and data synthesis techniques for mathematical reasoning, evaluating them under a unified pipeline designed to mirror real training and deployment scenarios. We further distill effective data selection strategies and identify practical methods suitable for industrial applications. Our findings highlight that structuring data in more interpretable formats or distilling from stronger models often outweighs simply scaling up data volume. This study provides actionable guidance for integrating training data to enhance LLM capabilities, supporting both cost-effective data curation and scalable model enhancement. We hope this work inspires further research on how to balance "more data" versus "better data" for real-world reasoning tasks.