LoCoBench-Agent: An Interactive Benchmark for LLM Agents in Long-Context Software Engineering
By: Jielin Qiu, Zuxin Liu, Zhiwei Liu, and more
Potential Business Impact:
Tests whether AI coding agents can navigate, understand, and modify large codebases over long, multi-turn development sessions.
As large language models (LLMs) evolve into sophisticated autonomous agents capable of complex software development tasks, evaluating their real-world capabilities becomes critical. While existing benchmarks like LoCoBench~\cite{qiu2025locobench} assess long-context code understanding, they focus on single-turn evaluation and cannot capture the multi-turn interactive nature, tool usage patterns, and adaptive reasoning required by real-world coding agents. We introduce \textbf{LoCoBench-Agent}, a comprehensive evaluation framework specifically designed to assess LLM agents in realistic, long-context software engineering workflows. Our framework extends LoCoBench's 8,000 scenarios into interactive agent environments, enabling systematic evaluation of multi-turn conversations, tool usage efficiency, error recovery, and architectural consistency across extended development sessions. We also introduce an evaluation methodology with 9 metrics spanning comprehension and efficiency dimensions. Our framework provides agents with 8 specialized tools (file operations, search, code analysis) and evaluates them across context lengths ranging from 10K to 1M tokens, enabling precise assessment of long-context performance. Through systematic evaluation of state-of-the-art models, we reveal several key findings: (1) agents exhibit remarkable long-context robustness; (2) a comprehension-efficiency trade-off exists, with a negative correlation: thorough exploration increases comprehension but reduces efficiency; and (3) conversation efficiency varies dramatically across models, with strategic tool usage patterns differentiating high-performing agents. As the first long-context LLM agent benchmark for software engineering, LoCoBench-Agent establishes a rigorous foundation for measuring agent capabilities, identifying performance gaps, and advancing autonomous software development at scale.
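To make the interactive setup concrete, the following is a minimal Python sketch of what a multi-turn agent evaluation loop with a small tool registry and a toy efficiency metric could look like. All names (make_toolbox, run_session, efficiency_score, the tool set, and the scoring formula) are illustrative assumptions, not the actual LoCoBench-Agent API; the real framework provides 8 specialized tools and 9 metrics.

    # Hypothetical sketch of a multi-turn, tool-using evaluation loop.
    # Not the LoCoBench-Agent implementation; names and formulas are invented for illustration.
    from dataclasses import dataclass, field
    from typing import Callable, Dict, List

    @dataclass
    class ToolCall:
        name: str
        argument: str
        result: str

    @dataclass
    class SessionLog:
        turns: int = 0
        tool_calls: List[ToolCall] = field(default_factory=list)

    def make_toolbox(repo: Dict[str, str]) -> Dict[str, Callable[[str], str]]:
        # Placeholder stand-ins for the tool categories named in the abstract
        # (file operations, search, code analysis).
        return {
            "read_file": lambda path: repo.get(path, "<file not found>"),
            "search": lambda term: "\n".join(p for p, src in repo.items() if term in src),
            "count_defs": lambda path: str(repo.get(path, "").count("def ")),
        }

    def run_session(agent_step, repo: Dict[str, str], task: str, max_turns: int = 20) -> SessionLog:
        # Drive a multi-turn session: each turn the agent proposes a tool call,
        # observes the result, and eventually signals completion.
        toolbox = make_toolbox(repo)
        log = SessionLog()
        observation = task
        for _ in range(max_turns):
            log.turns += 1
            action = agent_step(observation)  # e.g. {"tool": "search", "arg": "TODO"} or {"done": True}
            if action.get("done"):
                break
            tool, arg = action["tool"], action["arg"]
            result = toolbox[tool](arg)
            log.tool_calls.append(ToolCall(tool, arg, result))
            observation = result
        return log

    def efficiency_score(log: SessionLog) -> float:
        # Toy proxy: fewer turns and tool calls score higher.
        return 1.0 / (1 + log.turns + len(log.tool_calls))

A driver like this makes the comprehension-efficiency trade-off visible: an agent that explores many files before editing accumulates tool calls (lower efficiency_score here) but may answer comprehension-oriented metrics better, which is the negative correlation the paper reports.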
Similar Papers
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Software Engineering
Tests if AI can understand huge computer programs.
AgentLongBench: A Controllable Long Benchmark For Long-Contexts Agents via Environment Rollouts
Computation and Language
Tests AI's ability to solve tricky puzzles.