Mutation-Guided Unit Test Generation with a Large Language Model
By: Guancheng Wang, Qinghua Xu, Lionel C. Briand, and more
Potential Business Impact:
Finds more software bugs with smarter tests.
Unit tests play a vital role in uncovering potential faults in software. While tools like EvoSuite focus on maximizing code coverage, recent advances in large language models (LLMs) have shifted attention toward LLM-based test generation. However, code coverage metrics, such as line and branch coverage, remain overly emphasized in reported research, despite being weak indicators of a test suite's fault-detection capability. In contrast, mutation score offers a more reliable and stringent measure, as demonstrated by our finding that some test suites achieve 100% coverage but only a 4% mutation score. Although a few studies consider mutation score, the effectiveness of LLMs in killing mutants remains underexplored. In this paper, we propose MUTGEN, a mutation-guided, LLM-based test generation approach that incorporates mutation feedback directly into the prompt. Evaluated on 204 subjects from two benchmarks, MUTGEN significantly outperforms both EvoSuite and vanilla prompt-based strategies in terms of mutation score. Furthermore, MUTGEN introduces an iterative generation mechanism that pushes the limits of LLMs in killing additional mutants. Our study also provides insights into the limitations of LLM-based generation, analyzing the reasons for live and uncovered mutants as well as the impact of different mutation operators on generation effectiveness.
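To make the idea of mutation-guided prompting concrete, below is a minimal sketch of such a feedback loop. It is not the authors' MUTGEN implementation; the names mutation_guided_generation, llm.generate_tests, suite.kills, suite.merge, m.diff, and max_rounds are hypothetical stand-ins for an LLM client, a mutation-analysis tool, and test-suite bookkeeping.

```python
# Illustrative sketch only (not the authors' MUTGEN code) of a
# mutation-guided generation loop. `llm`, `suite`, and `mutants` are
# hypothetical objects standing in for an LLM client, a unit-test
# suite, and mutants produced by a mutation tool.

def mutation_guided_generation(code_under_test, mutants, llm, max_rounds=3):
    # Start from a plain ("vanilla") prompt that only shows the code under test.
    prompt = f"Write unit tests for the following code:\n{code_under_test}"
    suite = llm.generate_tests(prompt)

    for _ in range(max_rounds):
        survivors = [m for m in mutants if not suite.kills(m)]
        if not survivors:
            break  # every mutant is killed; mutation score is 100%

        # Mutation feedback: put the surviving mutants directly into the prompt
        # and ask for tests that distinguish each mutant from the original code.
        prompt = (
            "The following mutants survive the current test suite. "
            "Write unit tests that behave differently on the mutant "
            "and on the original code:\n" + "\n".join(m.diff for m in survivors)
        )
        suite = suite.merge(llm.generate_tests(prompt))

    # Mutation score: fraction of mutants killed by the final suite.
    score = sum(suite.kills(m) for m in mutants) / max(len(mutants), 1)
    return suite, score
```

The loop stops either when no mutants survive or after a fixed number of rounds, mirroring the iterative mechanism described in the abstract, and the returned score is the fraction of mutants killed, i.e., the mutation score that the paper argues is a stronger signal of fault-detection capability than line or branch coverage.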
Similar Papers
EvoGPT: Enhancing Test Suite Robustness via LLM-Based Generation and Genetic Optimization
Software Engineering
Finds bugs in computer code more effectively.
Mutation Testing via Iterative Large Language Model-Driven Scientific Debugging
Software Engineering
Helps computers find bugs by writing better tests.
Benchmarking and Revisiting Code Generation Assessment: A Mutation-Based Approach
Software Engineering
Makes AI better at writing computer code.