Score: 1

EDIT-Bench: Evaluating LLM Abilities to Perform Real-World Instructed Code Edits

Published: November 6, 2025 | arXiv ID: 2511.04486v1

By: Wayne Chi, Valerie Chen, Ryan Shar, and more

Potential Business Impact:

Helps AI coding assistants edit a developer's existing code the way a person would.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Instructed code editing, where LLMs directly modify a developer's existing code based on a user instruction, is becoming a widely used interaction mode in AI coding assistants. However, few benchmarks directly evaluate this capability, and current datasets often rely on artificial sources. We introduce EDIT-Bench, a benchmark for evaluating LLM code editing capabilities grounded in real-world usage, i.e., user instructions and code contexts collected in the wild. EDIT-Bench comprises 545 problems, multiple natural and programming languages, and a diverse set of real-world use cases, ranging from resolving errors to adding features. EDIT-Bench introduces context-dependent problems that require the model to understand code context, highlighted code, and cursor position in addition to the user instruction. We evaluate 40 diverse LLMs and observe that EDIT-Bench is a challenging set of problems where only 5 models score over 60%. We find that model performance varies across different categories of user instructions. Further, we find that varying levels of contextual information greatly affect task success rate, with performance varying by up to 11%, underscoring the importance of evaluating with realistic context.
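The abstract describes context-dependent problems that pair the user instruction with the surrounding code, a highlighted region, and a cursor position. As a rough illustration only (the class and field names below are assumptions, not the benchmark's actual schema), one such problem instance might be represented and turned into a model prompt like this:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Illustrative sketch only: names and structure are assumptions, not EDIT-Bench's real format.
@dataclass
class EditProblem:
    instruction: str                                # natural-language user instruction
    code_context: str                               # the surrounding source file
    highlighted_range: Optional[Tuple[int, int]]    # (start, end) character offsets, if any
    cursor_position: Optional[int]                  # caret offset within code_context, if any
    language: str                                   # programming language of the context

def build_prompt(problem: EditProblem) -> str:
    """Assemble a prompt exposing instruction, context, selection, and cursor to the model."""
    parts = [f"Language: {problem.language}", "Code:", problem.code_context]
    if problem.highlighted_range is not None:
        start, end = problem.highlighted_range
        parts.append(f"Highlighted region: characters {start}-{end}")
    if problem.cursor_position is not None:
        parts.append(f"Cursor at character {problem.cursor_position}")
    parts.append(f"Instruction: {problem.instruction}")
    return "\n".join(parts)
```

In this sketch, varying which optional fields are populated (selection, cursor, or neither) corresponds to the different levels of contextual information whose effect on success rate the paper reports.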


Page Count
30 pages

Category
Computer Science:
Software Engineering