Arrows of Math Reasoning Data Synthesis for Large Language Models: Diversity, Complexity and Correctness

Published: August 26, 2025 | arXiv ID: 2508.18824v1

By: Sirui Chen, Changxin Tian, Binbin Hu, and others

Potential Business Impact:

Improves the ability of large language models to solve mathematical problems by fine-tuning them on large-scale synthesized training data.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Enhancing the mathematical reasoning of large language models (LLMs) demands high-quality training data, yet conventional methods face critical challenges in scalability, cost, and data reliability. To address these limitations, we propose a novel program-assisted synthesis framework that systematically generates a high-quality mathematical corpus with guaranteed diversity, complexity, and correctness. This framework integrates mathematical knowledge systems and domain-specific tools to create executable programs. These programs are then translated into natural language problem-solution pairs and vetted by a bilateral validation mechanism that verifies solution correctness against program outputs and ensures program-problem consistency. We have generated 12.3 million such problem-solving triples. Experiments demonstrate that models fine-tuned on our data significantly improve their inference capabilities, achieving state-of-the-art performance on several benchmark datasets and showcasing the effectiveness of our synthesis approach.
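The bilateral validation the abstract describes can be sketched as two checks on each synthesized (program, problem, solution) triple: the solution's final answer must match the program's executed output, and the natural-language problem must remain consistent with the program. The sketch below is a minimal illustration under assumed conventions; all names (`Triple`, `run_program`, `bilateral_validate`, the `answer` variable inside programs) are hypothetical placeholders, not the paper's actual interfaces.

```python
# Hedged sketch of a bilateral validation mechanism for synthesized
# math-reasoning triples. Every identifier here is an assumption made
# for illustration, not the framework's real API.
from dataclasses import dataclass


@dataclass
class Triple:
    program: str   # executable program encoding the math problem
    problem: str   # natural-language problem statement
    solution: str  # natural-language solution ending in a final answer


def run_program(program: str) -> str:
    """Execute the synthesized program; its output serves as ground truth."""
    scope: dict = {}
    exec(program, scope)  # programs are self-generated, not user input
    return str(scope["answer"])  # convention assumed for this sketch


def extract_final_answer(solution: str) -> str:
    """Toy heuristic: take the text after the last '=' as the answer."""
    return solution.rsplit("=", 1)[-1].strip()


def consistent(program: str, problem: str) -> bool:
    """Placeholder for the program-problem consistency check
    (in practice this would be a model-based judgment)."""
    return True  # assumption: always passes in this sketch


def bilateral_validate(t: Triple) -> bool:
    # Check 1: the solution's answer agrees with the executed program.
    answer_ok = extract_final_answer(t.solution) == run_program(t.program)
    # Check 2: the natural-language problem reflects the program.
    return answer_ok and consistent(t.program, t.problem)


triple = Triple(
    program="answer = 3 * (5 + 2)",
    problem="What is three times the sum of five and two?",
    solution="3 * (5 + 2) = 3 * 7 = 21",
)
print(bilateral_validate(triple))  # → True
```

A triple failing either check (a wrong final answer, or a problem statement that drifts from what the program computes) would be discarded, which is how the pipeline guarantees correctness at scale.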

Country of Origin
🇨🇳 China

Page Count
5 pages

Category
Computer Science:
Computation and Language