Score: 1

Benchmarking and Revisiting Code Generation Assessment: A Mutation-Based Approach

Published: May 11, 2025 | arXiv ID: 2505.06880v1

By: Longtian Wang, Tianlin Li, Xiaofei Xie, and more

Potential Business Impact:

Enables more reliable evaluation of AI code-generation models, so their real-world coding ability is measured accurately.

Business Areas:
A/B Testing; Data and Analytics

Code Large Language Models (CLLMs) have exhibited outstanding performance in program synthesis, attracting significant attention from the research community. Evaluation of CLLMs' program synthesis capability has generally relied on manually curated benchmarks, yet there is a substantial gap between benchmark settings and real-world scenarios. Existing benchmarks typically provide only a single input prompt for each synthesis problem, whereas in practice a problem can be described in many ways, including with typos or with wording that developers struggle to understand and must rephrase after seeking clarification. Such varied descriptions can cause a CLLM's performance on the same problem to fluctuate, leading to biased evaluations under existing benchmarks. In this paper, we explore these pitfalls with the goal of revisiting and improving future benchmark designs. To simulate real-world variations in problem descriptions, we propose 10 mutation strategies and introduce three new metrics to evaluate their impact on code generation. Assessing five popular CLLMs on 12,834 generated prompt variants, we find a significant performance discrepancy between results on existing benchmarks and on mutated benchmarks containing perturbations and variations. This finding underscores the need for more robust evaluation methods and benchmarks.
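The abstract does not enumerate the paper's 10 mutation strategies or three metrics, so the following is only a minimal illustrative sketch of the general idea: perturb a benchmark prompt (here, a hypothetical typo-injection mutation) and compare a model's pass rate on the original prompt versus its variants. The names `inject_typos`, `pass_rate`, `model_generate`, and `tests` are illustrative placeholders, not the authors' implementation.

```python
import random


def inject_typos(prompt: str, rate: float = 0.05, seed: int = 0) -> str:
    """Illustrative mutation: swap adjacent characters in a fraction of words,
    simulating the typos a developer might make when describing a task."""
    rng = random.Random(seed)
    mutated = []
    for word in prompt.split(" "):
        if len(word) > 3 and rng.random() < rate:
            i = rng.randrange(len(word) - 1)
            word = word[:i] + word[i + 1] + word[i] + word[i + 2:]
        mutated.append(word)
    return " ".join(mutated)


def pass_rate(model_generate, tests, prompts) -> float:
    """Fraction of prompts whose generated code passes the problem's unit tests.
    `model_generate` and `tests` stand in for a real CLLM call and test harness."""
    passed = sum(1 for p in prompts if tests(model_generate(p)))
    return passed / len(prompts)


# Comparing accuracy on the original benchmark prompt against its mutated
# variants is how a discrepancy like the one reported would surface.
original = "Write a function that returns the n-th Fibonacci number."
variants = [inject_typos(original, rate=0.15, seed=s) for s in range(5)]
```

In this sketch, a robust model would score similarly on `original` and `variants`; a large gap between the two pass rates is the kind of evaluation bias the paper highlights.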

Country of Origin
🇨🇳 🇸🇬 China, Singapore

Page Count
8 pages

Category
Computer Science:
Software Engineering