Score: 3

CodeContests+: High-Quality Test Case Generation for Competitive Programming

Published: June 6, 2025 | arXiv ID: 2506.05817v1

By: Zihan Wang, Siyao Liu, Yang Sun, and more

BigTech Affiliations: ByteDance

Potential Business Impact:

Automatically generates higher-quality test cases so that code written by AI models can be checked more accurately.

Business Areas:
Contests, Gaming

Competitive programming, with its high reasoning difficulty and precise correctness feedback, has become a key task for both training and evaluating the reasoning capabilities of large language models (LLMs). However, while a large amount of public problem data is available, such as problem statements and solutions, the test cases for these problems are often difficult to obtain. Test case generation is therefore a necessary step in building large-scale datasets, and the quality of the test cases directly determines the accuracy of evaluation. In this paper, we introduce an LLM-based agent system that creates high-quality test cases for competitive programming problems. We apply this system to the CodeContests dataset and propose a new version with improved test cases, named CodeContests+. We evaluated the quality of the test cases in CodeContests+: first, we used 1.72 million submissions with pass/fail labels to measure how accurately these test cases judge submissions. The results indicate that CodeContests+ achieves significantly higher accuracy than CodeContests, in particular a notably higher True Positive Rate (TPR). Subsequently, our experiments in LLM Reinforcement Learning (RL) further confirmed that improvements in test case quality yield considerable advantages for RL.
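The evaluation described above treats each test suite as a binary classifier over labeled submissions: a correct submission the suite accepts is a true positive, an incorrect one it rejects is a true negative. A minimal sketch of this scoring, with illustrative names that are not from the paper's code:

```python
# Hypothetical sketch: scoring a generated test suite against ground-truth
# pass/fail labels on submissions, in the spirit of the paper's evaluation.
# ground_truth[i]   -> True if submission i is actually correct
# suite_verdicts[i] -> True if the generated tests accept submission i

def evaluate_test_suite(ground_truth, suite_verdicts):
    tp = sum(g and v for g, v in zip(ground_truth, suite_verdicts))
    tn = sum((not g) and (not v) for g, v in zip(ground_truth, suite_verdicts))
    pos = sum(ground_truth)
    neg = len(ground_truth) - pos
    tpr = tp / pos if pos else 0.0   # fraction of correct submissions accepted
    tnr = tn / neg if neg else 0.0   # fraction of wrong submissions rejected
    acc = (tp + tn) / len(ground_truth)
    return tpr, tnr, acc

# Example: a suite that wrongly rejects one of two correct submissions.
labels   = [True, True, False, False]
verdicts = [True, False, False, False]
print(evaluate_test_suite(labels, verdicts))  # (0.5, 1.0, 0.75)
```

A low TPR means the tests reject genuinely correct solutions (e.g. overly strict output matching), which is exactly the failure mode the paper reports CodeContests+ reducing.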

Country of Origin
🇨🇳 China


Page Count
28 pages

Category
Computer Science:
Software Engineering