E2Edev: Benchmarking Large Language Models in End-to-End Software Development Task

Published: October 16, 2025 | arXiv ID: 2510.14509v1

By: Jingyao Liu, Chen Huang, Zhizhao Guan, and more

Potential Business Impact:

Automatically tests LLM-generated software end to end, reducing manual review time and cost.

Business Areas:
Developer Tools Software

E2EDev comprises (i) a fine-grained set of user requirements, (ii) multiple BDD test scenarios with corresponding Python step implementations for each requirement, and (iii) a fully automated testing pipeline built on the Behave framework. To ensure its quality while reducing the annotation effort, E2EDev leverages our proposed Human-in-the-Loop Multi-Agent Annotation Framework (HITL-MAA). By evaluating various E2ESD frameworks and LLM backbones with E2EDev, our analysis reveals a persistent struggle to effectively solve these tasks, underscoring the critical need for more effective and cost-efficient E2ESD solutions. Our codebase and benchmark are publicly available at https://github.com/SCUNLP/E2EDev.
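The abstract pairs each requirement with BDD scenarios and Python step implementations run under Behave. The sketch below illustrates that general pattern; the scenario text, step names, and the tiny step registry are all illustrative assumptions (real Behave supplies `given`/`when`/`then` decorators and a shared `context` object), not the paper's actual test format:

```python
import re

# Illustrative stand-in for Behave's step registry; the real framework
# provides given/when/then decorators and matches steps from .feature files.
STEPS = []

def step(pattern):
    def register(fn):
        STEPS.append((re.compile(pattern), fn))
        return fn
    return register

class Context:
    """Mimics the shared context object Behave passes between steps."""
    pass

# Hypothetical requirement: "the app converts Celsius to Fahrenheit".
FEATURE = """\
Given the temperature is 100 Celsius
When the user converts it to Fahrenheit
Then the result is 212
"""

@step(r"Given the temperature is (\d+) Celsius")
def given_temperature(ctx, celsius):
    ctx.celsius = int(celsius)

@step(r"When the user converts it to Fahrenheit")
def when_convert(ctx):
    # The system under test would be invoked here.
    ctx.result = ctx.celsius * 9 / 5 + 32

@step(r"Then the result is (\d+)")
def then_result(ctx, expected):
    assert ctx.result == int(expected), f"got {ctx.result}"

def run_feature(feature_text):
    """Match each scenario line against a registered step and run it."""
    ctx = Context()
    for line in feature_text.strip().splitlines():
        for pattern, fn in STEPS:
            match = pattern.fullmatch(line.strip())
            if match:
                fn(ctx, *match.groups())
                break
        else:
            raise RuntimeError(f"no step matches: {line!r}")
    return ctx

ctx = run_feature(FEATURE)
print(ctx.result)  # 212.0
```

A pipeline like E2EDev's can then report pass/fail per scenario simply by whether any `Then` assertion raises.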

Country of Origin
🇨🇳 🇸🇬 China, Singapore


Page Count
52 pages

Category
Computer Science:
Software Engineering