Saving SWE-Bench: A Benchmark Mutation Approach for Realistic Agent Evaluation
By: Spandan Garg, Benjamin Steenhoek, Yufan Huang
Potential Business Impact:
Tests AI coding helpers more realistically.
Current benchmarks for evaluating software engineering agents, such as SWE-Bench Verified, are predominantly derived from GitHub issues and fail to accurately reflect how developers interact with chat-based coding assistants in integrated development environments (IDEs). We posit that this mismatch leads to a systematic overestimation of agents' capabilities in real-world scenarios, especially bug fixing. We introduce a novel benchmarking framework that transforms existing formal benchmarks into realistic user queries through systematic analysis of developer interaction patterns with chat-based agents. Our methodology is flexible and can be easily extended to existing benchmarks. In this paper, we apply our framework to SWE-Bench Verified, the TypeScript subset of Multi-SWE-Bench, and a private benchmark, SWE-Bench C#, transforming formal GitHub issue descriptions into realistic user-style queries based on telemetry analysis of interactions with a popular chat-based agent. Our findings reveal that existing benchmarks significantly overestimate agent capabilities, for some models by more than 50% over baseline performance on the public benchmarks and by roughly 10-16% on our internal benchmark. This work establishes a new paradigm for evaluating interactive chat-based software engineering agents through benchmark mutation techniques.
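To make the "benchmark mutation" idea concrete, the sketch below shows one plausible way to rewrite a formal GitHub issue description into the kind of short, informal query a developer might type into an IDE chat assistant. This is a minimal illustration only: the function name `mutate_issue`, the prompt wording, the model identifier, and the commented-out task loader are all assumptions for demonstration, not the authors' actual pipeline.

```python
# Illustrative sketch of benchmark mutation: rewrite a formal GitHub issue
# into a realistic, user-style chat query. All names and prompt text here
# are hypothetical, not taken from the paper's implementation.
from openai import OpenAI

client = OpenAI()

MUTATION_PROMPT = """You rewrite formal GitHub issue reports as the short,
informal questions a developer would type into an IDE chat assistant.
Keep the essential symptom, drop boilerplate (version tables, stack traces,
reproduction checklists), and use first-person phrasing.

Issue:
{issue}

Chat-style query:"""


def mutate_issue(issue_text: str, model: str = "gpt-4o") -> str:
    """Return a realistic user-style query derived from a formal issue report."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": MUTATION_PROMPT.format(issue=issue_text)}],
        temperature=0.7,
    )
    return response.choices[0].message.content.strip()


# Example usage: mutate one SWE-Bench task before handing it to the agent.
# task = load_swebench_task("django__django-12345")   # hypothetical loader
# realistic_query = mutate_issue(task["problem_statement"])
```

In the paper's framework, the rewriting is grounded in telemetry analysis of real chat interactions rather than a fixed prompt; the sketch simply shows where such a transformation would slot into an evaluation harness.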