Score: 3

LongCodeBench: Evaluating Coding LLMs at 1M Context Windows

Published: May 12, 2025 | arXiv ID: 2505.07897v2

By: Stefano Rando, Luca Romani, Alessio Sampieri, and more

BigTech Affiliations: Stanford University

Potential Business Impact:

Tests whether coding LLMs can understand and repair real-world codebases when the context spans up to one million tokens.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks -- not only due to the cost of collecting million-context tasks but also in identifying realistic scenarios that require significant contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce LongCodeBench (LCB), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of LCLMs in realistic and important settings by drawing from real-world GitHub issues and constructing QA (LongCodeQA) and bug fixing (LongSWE-Bench) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales -- ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long-context remains a weakness for all models, with performance drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5.
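The abstract describes building QA tasks by placing an entire repository in the model's context and asking questions derived from real GitHub issues. As a rough illustration of what such an evaluation loop might look like, here is a minimal sketch: it concatenates a repository's source files into one long prompt and poses a multiple-choice question. The function names, the character budget, and the `query_model` callable are all hypothetical assumptions for illustration, not the paper's actual harness.

```python
from pathlib import Path


def build_repo_context(repo_dir: str, max_chars: int = 4_000_000) -> str:
    """Concatenate a repository's Python files into one long context string,
    stopping once a rough character budget is reached (hypothetical limit)."""
    parts, total = [], 0
    for path in sorted(Path(repo_dir).rglob("*.py")):
        text = path.read_text(errors="ignore")
        chunk = f"# File: {path.relative_to(repo_dir)}\n{text}\n"
        if total + len(chunk) > max_chars:
            break
        parts.append(chunk)
        total += len(chunk)
    return "".join(parts)


def ask_long_code_qa(question: str, choices: list[str], repo_dir: str, query_model) -> str:
    """Pose a multiple-choice question about the repository, with the whole
    codebase placed in the prompt. `query_model` stands in for any callable
    that sends a prompt to a long-context LLM and returns its text reply."""
    context = build_repo_context(repo_dir)
    options = "\n".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices))
    prompt = (
        f"{context}\n\n"
        f"Question about the repository above:\n{question}\n"
        f"Options:\n{options}\n"
        "Answer with a single letter."
    )
    return query_model(prompt).strip()
```

An answer returned by the model would then be compared against the ground-truth option to score comprehension; the paper's actual task construction and grading details are in the benchmark itself.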

Country of Origin
🇮🇹 🇺🇸 Italy, United States

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computation and Language