Increasing LLM Coding Capabilities through Diverse Synthetic Coding Tasks
By: Amal Abed, Ivan Lukic, Jörg K. H. Franke, and more
Potential Business Impact:
Teaches computers to code by showing thinking steps.
Large language models (LLMs) have shown great promise in code generation, yet their progress remains limited by the shortage of large-scale datasets that are both diverse and well-aligned with human reasoning. Most existing resources pair problems with solutions but omit the intermediate thought process that guides coding. To close this gap, we present a scalable synthetic data generation pipeline that produces nearly 800k instruction-reasoning-code-test quadruplets. Each sample combines a task, a step-by-step reasoning trace, a working solution, and executable tests, enabling models to learn not just the what but also the how of problem solving. Our pipeline combines four key components: curated contest problems, web-mined content filtered by relevance classifiers, data expansion guided by reasoning patterns, and multi-stage execution-based validation. A genetic mutation algorithm further increases task diversity while maintaining consistency between reasoning traces and code implementations. Our key finding is that fine-tuning LLMs on this dataset yields consistent improvements on coding benchmarks. Beyond raw accuracy, reasoning-aware data can substitute for model scaling, generalize across architectures, and outperform leading open-source alternatives under identical sample budgets. Our work establishes reasoning-centered synthetic data generation as an efficient approach for advancing coding capabilities in LLMs. We publish our dataset and generation pipeline to facilitate further research.
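To make the execution-based validation stage concrete, here is a minimal sketch of how one quadruplet's solution could be checked against its generated tests, assuming Python solutions paired with assertion-style tests. The function name `validate_quadruplet` and the single-file harness are illustrative assumptions, not the paper's actual implementation, which the abstract describes only as multi-stage execution-based validation.

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def validate_quadruplet(solution_code: str, test_code: str,
                        timeout_s: float = 10.0) -> bool:
    """Run a candidate solution against its generated tests in a subprocess.

    Illustrative sketch only: a full pipeline would add further stages
    (e.g. syntax checks, deduplication, reasoning-code consistency checks).
    """
    with tempfile.TemporaryDirectory() as tmp:
        script = Path(tmp) / "candidate.py"
        # Append the tests so they can call the solution's functions directly.
        script.write_text(solution_code + "\n\n" + test_code)
        try:
            result = subprocess.run(
                [sys.executable, str(script)],
                capture_output=True,
                timeout=timeout_s,
            )
        except subprocess.TimeoutExpired:
            return False  # non-terminating solutions are rejected
        # A failing assertion exits with a nonzero return code.
        return result.returncode == 0

if __name__ == "__main__":
    solution = "def add(a, b):\n    return a + b"
    tests = "assert add(2, 3) == 5\nassert add(-1, 1) == 0"
    print(validate_quadruplet(solution, tests))  # True
```

Under this scheme, any sample whose solution fails even one of its tests (or fails to terminate) is dropped, so only quadruplets with verified, executable solutions reach the training set.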
Similar Papers
Arrows of Math Reasoning Data Synthesis for Large Language Models: Diversity, Complexity and Correctness
Computation and Language
Teaches computers to solve math problems better.
Synthesis by Design: Controlled Data Generation via Structural Guidance
Computation and Language
Teaches computers to solve harder math problems.
On Code-Induced Reasoning in LLMs
Computation and Language
Code's structure helps computers think better than its meaning does.