OctoBench: Benchmarking Scaffold-Aware Instruction Following in Repository-Grounded Agentic Coding
By: Deming Ding, Shichun Liu, Enhui Yang, and more
Modern coding scaffolds turn LLMs into capable software agents, but their ability to follow scaffold-specified instructions remains under-examined, especially when constraints are heterogeneous and persist across interactions. To fill this gap, we introduce OctoBench, which benchmarks scaffold-aware instruction following in repository-grounded agentic coding. OctoBench includes 34 environments and 217 tasks instantiated under three scaffold types, and is paired with 7,098 objective checklist items. To disentangle solving the task from complying with scaffold instructions, we provide an automated observation-and-scoring toolkit that captures full trajectories and performs fine-grained checks. Experiments on eight representative models reveal a systematic gap between task-solving and scaffold-aware compliance, underscoring the need for training and evaluation that explicitly targets heterogeneous instruction following. We release the benchmark to support reproducible evaluation and to accelerate the development of more scaffold-aware coding agents.
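The abstract describes a toolkit that records full agent trajectories and checks them against objective checklist items. Below is a minimal sketch of what such checklist-based compliance scoring could look like, assuming a trajectory is simply a list of recorded agent steps; the `ChecklistItem` structure, `compliance_rate` helper, and example rule are illustrative assumptions, not the released OctoBench toolkit.

```python
# Hypothetical sketch of checklist-based compliance scoring.
# The trajectory format and helper names are assumptions for illustration.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ChecklistItem:
    description: str                        # human-readable scaffold rule
    check: Callable[[List[dict]], bool]     # objective predicate over a trajectory


def compliance_rate(trajectory: List[dict], checklist: List[ChecklistItem]) -> float:
    """Fraction of checklist items satisfied by a captured agent trajectory."""
    if not checklist:
        return 1.0
    passed = sum(1 for item in checklist if item.check(trajectory))
    return passed / len(checklist)


# Example: one scaffold rule might restrict file edits to an allowed directory.
checklist = [
    ChecklistItem(
        "Only modify files under src/",
        lambda traj: all(
            step.get("path", "").startswith("src/")
            for step in traj
            if step.get("action") == "edit_file"
        ),
    ),
]
trajectory = [{"action": "edit_file", "path": "src/main.py"}]
print(compliance_rate(trajectory, checklist))  # 1.0
```

A per-item breakdown like this, aggregated separately from task success, is one way to expose the gap between solving the task and following the scaffold's rules.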
Similar Papers
CodeAlignBench: Assessing Code Generation Models on Developer-Preferred Code Adjustments
Software Engineering
Tests whether code-generation models can adjust their output to match developer-preferred code changes.
NL2Repo-Bench: Towards Long-Horizon Repository Generation Evaluation of Coding Agents
Computation and Language
Tests whether coding agents can generate entire repositories over long horizons.
From Laboratory to Real-World Applications: Benchmarking Agentic Code Reasoning at the Repository Level
Software Engineering
Benchmarks how well agents reason about and fix code at the repository level in real-world settings.