LongCodeBench: Evaluating Coding LLMs at 1M Context Windows
By: Stefano Rando, Luca Romani, Alessio Sampieri, and more
Potential Business Impact:
Tests if computers can understand long computer code.
Context lengths for models have grown rapidly, from thousands to millions of tokens in just a few years. The extreme context sizes of modern long-context models have made it difficult to construct realistic long-context benchmarks -- not only because of the cost of collecting million-token tasks, but also because of the difficulty of identifying realistic scenarios that require such large contexts. We identify code comprehension and repair as a natural testbed and challenge task for long-context models and introduce LongCodeBench (LCB), a benchmark to test LLM coding abilities in long-context scenarios. Our benchmark tests both the comprehension and repair capabilities of long-context language models (LCLMs) in realistic and important settings by drawing from real-world GitHub issues and constructing question-answering (LongCodeQA) and bug-fixing (LongSWE-Bench) tasks. We carefully stratify the complexity of our benchmark, enabling us to evaluate models across different scales -- ranging from Qwen2.5 14B Instruct to Google's flagship Gemini model. We find that long context remains a weakness for all models, with performance drops such as from 29% to 3% for Claude 3.5 Sonnet, or from 70.2% to 40% for Qwen2.5.
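The abstract describes LongCodeQA as question answering over real repositories at million-token scale, but it does not spell out how prompts are assembled. The sketch below is a hypothetical illustration, not the paper's actual pipeline: it packs a repository's source files into a single prompt up to a rough token budget and appends an issue-derived question. All names here (build_longcodeqa_prompt, the repo path, the character-based token heuristic, the file-extension filter) are assumptions for illustration only.

```python
import os

# Hypothetical budget matching a 1M-token context window.
MAX_CONTEXT_TOKENS = 1_000_000

# Illustrative set of source-file extensions to include.
CODE_EXTENSIONS = (".py", ".js", ".ts", ".java", ".c", ".cpp", ".go", ".rs")


def rough_token_count(text: str) -> int:
    # Crude heuristic (~4 characters per token); a real harness would use the model's tokenizer.
    return len(text) // 4


def build_longcodeqa_prompt(repo_dir: str, question: str, budget: int = MAX_CONTEXT_TOKENS) -> str:
    """Concatenate repository source files up to a token budget, then append an issue-derived question."""
    file_paths = []
    for root, _dirs, files in os.walk(repo_dir):
        for name in files:
            if name.endswith(CODE_EXTENSIONS):
                file_paths.append(os.path.join(root, name))

    parts, used = [], 0
    for path in sorted(file_paths):
        try:
            with open(path, encoding="utf-8", errors="ignore") as fh:
                code = fh.read()
        except OSError:
            continue
        cost = rough_token_count(code)
        if used + cost > budget:
            break  # stop once the context budget is exhausted
        parts.append(f"### File: {os.path.relpath(path, repo_dir)}\n{code}")
        used += cost

    parts.append(f"### Question (derived from a GitHub issue)\n{question}\nAnswer:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Hypothetical usage: point at a checked-out repository and ask an issue-style question.
    prompt = build_longcodeqa_prompt("path/to/some_repo", "Why does the CLI crash when --verbose is passed?")
    print(rough_token_count(prompt), "approx. tokens in the assembled prompt")
```

A real benchmark of this kind would additionally control file ordering and stratify examples by total context length, which is what allows the reported comparison of models across scales.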
Similar Papers
LONGCODEU: Benchmarking Long-Context Language Models on Long Code Understanding
Software Engineering
Tests if computers can understand long computer code.
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Software Engineering
Tests if AI can understand huge computer programs.
LooGLE v2: Are LLMs Ready for Real World Long Dependency Challenges?
Computation and Language
Tests if computers can understand very long texts.