NL2Repo-Bench: Towards Long-Horizon Repository Generation Evaluation of Coding Agents
By: Jingzhe Ding, Shengda Long, Changxin Pu, and more
Potential Business Impact:
Tests if AI can build whole computer programs alone.
Recent advances in coding agents suggest rapid progress toward autonomous software development, yet existing benchmarks fail to rigorously evaluate the long-horizon capabilities required to build complete software systems. Most prior evaluations focus on localized code generation, scaffolded completion, or short-term repair tasks, leaving open the question of whether agents can sustain coherent reasoning, planning, and execution over the extended horizons demanded by real-world repository construction. To address this gap, we present NL2Repo Bench, a benchmark explicitly designed to evaluate the long-horizon repository generation ability of coding agents. Given only a single natural-language requirements document and an empty workspace, agents must autonomously design the architecture, manage dependencies, implement multi-module logic, and produce a fully installable Python library. Our experiments across state-of-the-art open- and closed-source models reveal that long-horizon repository generation remains largely unsolved: even the strongest agents achieve below 40% average test pass rates and rarely complete an entire repository correctly. Detailed analysis uncovers fundamental long-horizon failure modes, including premature termination, loss of global coherence, fragile cross-file dependencies, and inadequate planning over hundreds of interaction steps. NL2Repo Bench establishes a rigorous, verifiable testbed for measuring sustained agentic competence and highlights long-horizon reasoning as a central bottleneck for the next generation of autonomous coding agents.
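To make the evaluation protocol concrete, below is a minimal sketch of how a repository-level score along the lines described in the abstract could be computed: the agent's output is installed as a Python package and a held-out test suite is run against it, with the pass rate as the score. The function name evaluate_repo, the directory layout (generated_repo, hidden_tests), and the use of pytest's JUnit XML report are illustrative assumptions, not the paper's actual harness.

import subprocess
import sys
import xml.etree.ElementTree as ET
from pathlib import Path


def evaluate_repo(repo_dir: Path, test_dir: Path) -> float:
    """Install the agent-generated repository, run a held-out test suite,
    and return the fraction of tests that pass (0.0 on install failure)."""
    # The benchmark requires a fully installable Python library, so a
    # failed `pip install` scores zero before any tests are run.
    install = subprocess.run(
        [sys.executable, "-m", "pip", "install", str(repo_dir)],
        capture_output=True, text=True,
    )
    if install.returncode != 0:
        return 0.0

    # Run pytest on the hidden tests and write a JUnit XML report
    # (supported natively by pytest via --junitxml).
    report = repo_dir / "report.xml"
    subprocess.run(
        [sys.executable, "-m", "pytest", str(test_dir), f"--junitxml={report}"],
        capture_output=True, text=True,
    )
    if not report.exists():
        return 0.0

    # Each <testsuite> element carries counts of tests, failures, errors.
    tests = failures = errors = skipped = 0
    for suite in ET.parse(report).getroot().iter("testsuite"):
        tests += int(suite.get("tests", 0))
        failures += int(suite.get("failures", 0))
        errors += int(suite.get("errors", 0))
        skipped += int(suite.get("skipped", 0))
    if tests == 0:
        return 0.0
    return (tests - failures - errors - skipped) / tests


if __name__ == "__main__":
    # Hypothetical layout: the agent's output in ./generated_repo and the
    # benchmark's hidden tests in ./hidden_tests.
    score = evaluate_repo(Path("generated_repo"), Path("hidden_tests"))
    print(f"average test pass rate: {score:.1%}")

Under this kind of scoring, an agent that produces a repository which does not even install receives zero, which is consistent with the abstract's observation that even the strongest agents stay below a 40% average test pass rate and rarely complete an entire repository correctly.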
Similar Papers
Curriculum Guided Massive Multi Agent System Solving For Robust Long Horizon Tasks
Computation and Language
Helps robots solve hard, long problems together.
OdysseyBench: Evaluating LLM Agents on Long-Horizon Complex Office Application Workflows
Computation and Language
Tests smart computer helpers on office tasks.
LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering
Software Engineering
Tests AI's ability to write complex computer code.