SWE-smith: Scaling Data for Software Engineering Agents
By: John Yang, Kilian Lieret, Carlos E. Jimenez, and more
Potential Business Impact:
Automatically creates large numbers of practice problems that teach AI to fix code.
Despite recent progress in Language Models (LMs) for software engineering, collecting training data remains a significant pain point. Existing datasets are small, offering at most a few thousand training instances drawn from 11 or fewer GitHub repositories. The procedures for curating such datasets are often complex, necessitating hundreds of hours of human labor; companion execution environments also take up several terabytes of storage, severely limiting their scalability and usability. To address this pain point, we introduce SWE-smith, a novel pipeline for generating software engineering training data at scale. Given any Python codebase, SWE-smith constructs a corresponding execution environment, then automatically synthesizes hundreds to thousands of task instances that break existing tests in the codebase. Using SWE-smith, we create a dataset of 50k instances sourced from 128 GitHub repositories, an order of magnitude larger than all previous works. We train SWE-agent-LM-32B, which achieves a 40.2% Pass@1 resolve rate on the SWE-bench Verified benchmark, the state of the art among open-source models. We open source SWE-smith (collection procedure, task instances, trajectories, models) to lower the barrier of entry for research on LM systems for automated software engineering. All assets are available at https://swesmith.com.
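The abstract compresses the pipeline into one sentence, so a toy sketch may help clarify the core idea: inject a candidate change into a repository and keep it as a task instance only if it flips at least one previously passing test to failing. Everything below is an illustrative assumption, not SWE-smith's actual implementation: the pytest invocation, the single AST mutation, and the helper names (run_tests, FlipComparison, synthesize_task) are all hypothetical, whereas the real pipeline builds a dedicated execution environment per repository and uses richer bug-generation strategies.

```python
import ast
import subprocess
from pathlib import Path


def run_tests(repo_dir: str) -> set[str]:
    """Run the test suite and return the ids of passing tests.

    Assumes the repo uses pytest; `-rA` adds a short summary section
    whose PASSED lines we parse.
    """
    proc = subprocess.run(
        ["python", "-m", "pytest", "-q", "--tb=no", "-rA"],
        cwd=repo_dir, capture_output=True, text=True,
    )
    passed = set()
    for line in proc.stdout.splitlines():
        if line.startswith("PASSED"):
            passed.add(line.split(" ", 1)[1].strip())
    return passed


class FlipComparison(ast.NodeTransformer):
    """Toy bug injector: rewrite `<` as `<=` (a classic off-by-one)."""

    def visit_Compare(self, node: ast.Compare) -> ast.Compare:
        self.generic_visit(node)
        for i, op in enumerate(node.ops):
            if isinstance(op, ast.Lt):
                node.ops[i] = ast.LtE()
        return node


def synthesize_task(repo_dir: str, source_file: str) -> dict | None:
    """Inject a bug into `source_file`; keep it only if it breaks tests.

    Mirrors the paper's validity criterion (a candidate change counts
    as a task instance only if it makes existing tests fail); the
    mutation itself is just one hypothetical strategy.
    """
    path = Path(repo_dir) / source_file
    original = path.read_text()
    baseline = run_tests(repo_dir)

    # Compare against a re-unparsed copy so formatting changes from
    # ast.unparse (Python 3.9+) don't masquerade as mutations.
    pristine = ast.unparse(ast.parse(original))
    mutated = ast.unparse(FlipComparison().visit(ast.parse(original)))
    if mutated == pristine:
        return None  # nothing to mutate in this file

    path.write_text(mutated)
    try:
        broken = baseline - run_tests(repo_dir)  # tests that newly fail
    finally:
        path.write_text(original)  # always restore the codebase

    if not broken:
        return None  # benign mutation; discard it
    return {"file": source_file, "fail_to_pass": sorted(broken)}
```

Because validation is purely execution-based (did previously passing tests break?), any generator of candidate changes can be plugged in, which is what lets this style of pipeline scale to hundreds of repositories without manual labeling.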
Similar Papers
SWE-Dev: Building Software Engineering Agents with Training and Inference Scaling
Artificial Intelligence
Helps computers write and fix code better.
SWE-Factory: Your Automated Factory for Issue Resolution Training Data and Evaluation Benchmarks
Software Engineering
Helps computers learn to fix software bugs faster.
MLE-Smith: Scaling MLE Tasks with Automated Multi-Agent Pipeline
Machine Learning (CS)
Makes computers create hard practice problems for AI.