Training Versatile Coding Agents in Synthetic Environments
By: Yiqi Zhu, Apurva Gandhi, Graham Neubig
Potential Business Impact:
Teaches computers to code and fix bugs.
Prior work on training software engineering agents has explored using existing resources, such as issues on GitHub repositories, to construct software engineering tasks and corresponding test suites. These approaches face two key limitations: (1) their reliance on pre-existing GitHub repositories offers limited flexibility, and (2) their primary focus on issue-resolution tasks restricts their applicability to the much wider variety of tasks a software engineer must handle. To overcome these challenges, we introduce SWE-Playground, a novel pipeline for generating environments and trajectories that supports the training of versatile coding agents. Unlike prior efforts, SWE-Playground synthetically generates projects and tasks from scratch with strong language models and agents, eliminating reliance on external data sources. This allows us to tackle a much wider variety of coding tasks, such as reproducing issues by generating unit tests and implementing libraries from scratch. We demonstrate the effectiveness of this approach on three distinct benchmarks, and the results indicate that SWE-Playground produces trajectories with dense training signal, enabling agents to reach performance comparable to previous works with significantly fewer trajectories.
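The abstract describes a pipeline of this general shape: synthesize a project, derive varied tasks from it, then keep only agent trajectories that pass verification. The sketch below is a minimal illustration of that flow; the function names, the `Task` structure, and the stage boundaries are assumptions for exposition, not the paper's actual API, and the language-model and agent calls are passed in as opaque callables.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str   # natural-language task statement
    test_cmd: str      # command used to verify a solution

def generate_project(llm):
    """Hypothetical stage 1: ask a strong LLM to invent a small
    project spec from scratch (no external repository needed)."""
    return llm("Design a small software library: name, spec, file layout.")

def generate_tasks(llm, project):
    """Hypothetical stage 2: derive varied tasks from the project,
    e.g. reproducing an issue via a failing unit test, adding a
    feature, or implementing a module from scratch."""
    prompts = llm(f"Propose coding tasks for project {project['name']}.")
    return [Task(description=p, test_cmd="pytest -q") for p in prompts]

def collect_trajectories(agent, tasks, verifier):
    """Hypothetical stage 3: run an agent on each task and keep only
    trajectories whose final state passes verification, so every
    retained trajectory carries dense training signal."""
    kept = []
    for task in tasks:
        trajectory = agent(task)
        if verifier(trajectory):
            kept.append(trajectory)
    return kept
```

In this sketch the verifier is simply a pass/fail check on the task's tests; the actual system presumably uses richer filtering, but the key point the abstract makes is that projects and tasks are synthesized from scratch rather than mined from GitHub.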
Similar Papers
Sharp Tools: How Developers Wield Agentic AI in Real Software Engineering Tasks
Software Engineering
Helps computers work with people on coding tasks.
SWE-fficiency: Can Language Models Optimize Real-World Repositories on Real Workloads?
Software Engineering
Helps computers fix slow code automatically.
SWE-Dev: Building Software Engineering Agents with Training and Inference Scaling
Artificial Intelligence
Helps computers write and fix code better.