Idea First, Code Later: Disentangling Problem Solving from Code Generation in Evaluating LLMs for Competitive Programming
By: Sama Hadhoud, Alaa Elsetohy, Frederikus Hudi, and more
Potential Business Impact:
Helps computers solve tricky programming contest problems better.
Large Language Models (LLMs) increasingly succeed on competitive programming problems, yet existing evaluations conflate algorithmic reasoning with code-level implementation. We argue that competitive programming is fundamentally a problem-solving task and propose centering natural-language editorials in both solution generation and evaluation. Generating an editorial prior to code improves solve rates for some LLMs, with substantially larger gains when using expertly written gold editorials. However, even with gold editorials, models continue to struggle with implementation, while the gap between generated and gold editorials reveals a persistent problem-solving bottleneck in specifying correct and complete algorithms. Beyond pass/fail metrics, we diagnose reasoning errors by comparing model-generated editorials to gold standards using expert annotations and validate an LLM-as-a-judge protocol for scalable evaluation. We introduce a dataset of 83 ICPC-style problems with gold editorials and full test suites, and evaluate 19 LLMs, arguing that future benchmarks should explicitly separate problem solving from implementation.
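The editorial-first approach described above can be pictured as a two-stage prompting pipeline: first ask the model for a natural-language editorial, then condition code generation on that editorial (or substitute a gold, human-written one). The sketch below is illustrative only and assumes a placeholder llm(prompt) -> str helper; the prompt wording and function names are assumptions, not the authors' implementation.

```python
def solve_editorial_first(problem_statement: str, llm) -> tuple[str, str]:
    """Two-stage pipeline: editorial first, then code conditioned on it.

    `llm` is a hypothetical callable mapping a prompt string to a
    completion string; swap in any model client of your choice.
    """
    # Stage 1: ask for a natural-language editorial (algorithm idea,
    # key observations, complexity) before any code is written.
    editorial_prompt = (
        "Write an editorial for the following competitive programming "
        "problem. Describe the algorithm, key observations, and time "
        "complexity, but do not write code.\n\n" + problem_statement
    )
    editorial = llm(editorial_prompt)

    # Stage 2: generate code conditioned on the editorial.
    # A gold, expert-written editorial could be passed in here instead.
    code_prompt = (
        "Implement the solution described in the editorial below as a "
        "complete program that reads from stdin and writes to stdout.\n\n"
        "Problem:\n" + problem_statement +
        "\n\nEditorial:\n" + editorial
    )
    code = llm(code_prompt)
    return editorial, code
```

Separating the two stages also makes the diagnosis in the abstract concrete: comparing outcomes with the model's own editorial versus a gold editorial isolates the problem-solving bottleneck from the implementation bottleneck.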
Similar Papers
LiveCodeBench Pro: How Do Olympiad Medalists Judge LLMs in Competitive Programming?
Software Engineering
Computers still can't solve hard math problems alone.
Holistic Evaluation of State-of-the-Art LLMs for Code Generation
Software Engineering
Makes computers write better, error-free code.
LLM-ProS: Analyzing Large Language Models' Performance in Competitive Problem Solving
Computation and Language
Tests computers to solve hard math and coding problems.