FrontierCS: Evolving Challenges for Evolving Intelligence
By: Qiuyang Mang, Wenhao Chai, Zhifei Li, and more
Potential Business Impact:
Tests whether AI models can solve hard, open-ended computer science problems.
We introduce FrontierCS, a benchmark of 156 open-ended problems across diverse areas of computer science, designed and reviewed by experts, including CS PhDs and top-tier competitive programming participants and problem setters. Unlike existing benchmarks that focus on tasks with known optimal solutions, FrontierCS targets problems where the optimal solution is unknown, but the quality of a solution can be objectively evaluated. Models solve these tasks by implementing executable programs rather than outputting a direct answer. FrontierCS includes algorithmic problems, which are often NP-hard variants of competitive programming problems with objective partial scoring, and research problems that admit the same kind of objective partial scoring. For each problem we provide an expert reference solution and an automatic evaluator. Combining open-ended design, measurable progress, and expert curation, FrontierCS provides a benchmark at the frontier of computer-science difficulty. Empirically, we find that frontier reasoning models still lag far behind human experts on both the algorithmic and research tracks, that increasing reasoning budgets alone does not close this gap, and that models often over-optimize for generating merely workable code instead of discovering high-quality algorithms and system designs.
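To make the scoring pattern in the abstract concrete, here is a minimal Python sketch of partial-credit evaluation for an open-ended task. It is not the FrontierCS harness: the toy problem (0/1 knapsack), the function names, and the scoring rule are illustrative assumptions only, showing how a candidate program's objective value can be compared against an expert reference to yield a score between 0 and 1.

```python
# Hypothetical sketch of objective partial scoring for an open-ended task.
# The toy knapsack problem and all names here are assumptions, not the
# actual FrontierCS evaluator or API.

from typing import List, Tuple

Item = Tuple[int, int]  # (weight, value)

def objective(selection: List[int], items: List[Item], capacity: int) -> int:
    """Objective value of a selection of item indices; 0 if it exceeds capacity."""
    weight = sum(items[i][0] for i in selection)
    value = sum(items[i][1] for i in selection)
    return value if weight <= capacity else 0

def partial_score(candidate_value: int, reference_value: int) -> float:
    """Partial credit: candidate objective scaled by the expert reference, clipped to [0, 1]."""
    if reference_value <= 0:
        return 0.0
    return max(0.0, min(1.0, candidate_value / reference_value))

if __name__ == "__main__":
    items = [(3, 60), (4, 40), (5, 90), (2, 30)]
    capacity = 7

    reference_selection = [2, 3]   # expert reference: weight 7, value 120
    candidate_selection = [0, 1]   # model's heuristic answer: weight 7, value 100

    ref_value = objective(reference_selection, items, capacity)
    cand_value = objective(candidate_selection, items, capacity)
    print(f"partial score: {partial_score(cand_value, ref_value):.2f}")  # ~0.83
```

In this sketch a merely workable candidate still earns partial credit, while only matching or beating the expert reference earns full credit, which mirrors the gap the paper reports between workable code and high-quality solutions.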
Similar Papers
FrontierScience: Evaluating AI's Ability to Perform Expert-Level Scientific Tasks
Artificial Intelligence
Tests if AI can do hard science problems.
FormulaOne: Measuring the Depth of Algorithmic Reasoning Beyond Competitive Programming
Artificial Intelligence
Tests AI's real-world problem-solving skills.
ReasoningWeekly: A General Knowledge and Verbal Reasoning Challenge for Large Language Models
Artificial Intelligence
Tests AI with puzzles anyone can understand.