CodeContests+: High-Quality Test Case Generation for Competitive Programming
By: Zihan Wang, Siyao Liu, Yang Sun, and more
Potential Business Impact:
Makes computer programs test themselves better.
Competitive programming, with its high reasoning difficulty and precise correctness feedback, has become a key task for both training and evaluating the reasoning capabilities of large language models (LLMs). However, while a large amount of public problem data, such as problem statements and solutions, is available, the test cases for these problems are often difficult to obtain. Test case generation is therefore a necessary step in building large-scale datasets, and the quality of the test cases directly determines the accuracy of the evaluation. In this paper, we introduce an LLM-based agent system that creates high-quality test cases for competitive programming problems. We apply this system to the CodeContests dataset and propose a new version with improved test cases, named CodeContests+. We evaluated the quality of the test cases in CodeContests+. First, we used 1.72 million submissions with pass/fail labels to examine how accurately these test cases judge submissions. The results indicated that CodeContests+ achieves significantly higher accuracy than CodeContests, with a notably higher True Positive Rate (TPR) in particular. Subsequently, our experiments in LLM Reinforcement Learning (RL) further confirmed that improvements in test case quality yield considerable advantages for RL.
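To make the evaluation described above concrete, the sketch below shows one plausible way to score a generated test suite against submissions whose ground-truth pass/fail labels are known, reporting TPR, TNR, and overall accuracy. This is a minimal illustration under stated assumptions, not the paper's implementation; the Submission dataclass and evaluate_suite function are hypothetical names introduced here.

```python
# Hypothetical sketch: measuring how accurately a generated test suite
# labels submissions, given ground-truth pass/fail labels.
from dataclasses import dataclass


@dataclass
class Submission:
    ground_truth_pass: bool  # official verdict: is the submission actually correct?
    suite_pass: bool         # verdict produced by the generated test cases


def evaluate_suite(submissions: list[Submission]) -> dict[str, float]:
    """Compute TPR, TNR, and overall accuracy of a test suite."""
    tp = sum(s.ground_truth_pass and s.suite_pass for s in submissions)
    tn = sum((not s.ground_truth_pass) and (not s.suite_pass) for s in submissions)
    positives = sum(s.ground_truth_pass for s in submissions)
    negatives = len(submissions) - positives
    return {
        # TPR: fraction of correct solutions the suite accepts
        "TPR": tp / positives if positives else 0.0,
        # TNR: fraction of buggy solutions the suite rejects
        "TNR": tn / negatives if negatives else 0.0,
        "accuracy": (tp + tn) / len(submissions) if submissions else 0.0,
    }


if __name__ == "__main__":
    sample = [
        Submission(True, True),    # correct solution, accepted
        Submission(True, False),   # correct solution, wrongly rejected (lowers TPR)
        Submission(False, False),  # buggy solution, correctly rejected
        Submission(False, True),   # buggy solution slips through weak tests
    ]
    print(evaluate_suite(sample))
```

In this framing, a low TPR means the tests are too strict or simply wrong (valid solutions fail them), while a low TNR means the tests are too weak to expose bugs; the abstract's emphasis on a higher TPR for CodeContests+ corresponds to fewer correct solutions being unfairly rejected.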
Similar Papers
Can LLMs Generate Reliable Test Case Generators? A Study on Competition-Level Programming Problems
Computation and Language
Helps computers find bugs in other computer code.
Can LLMs Generate High-Quality Test Cases for Algorithm Problems? TestCase-Eval: A Systematic Evaluation of Fault Coverage and Exposure
Software Engineering
Tests computer code to find mistakes.
Automatic High-Level Test Case Generation using Large Language Models
Software Engineering
Helps computers write tests that match what businesses want.