E-Test: E'er-Improving Test Suites
By: Ketai Qiu, Luca Di Grazia, Leonardo Mariani, and more
Potential Business Impact:
Finds hidden software bugs missed by current tests.
Test suites are inherently imperfect, and testers can always enrich a suite with new test cases that improve its quality and, consequently, the reliability of the target software system. However, finding test cases that explore execution scenarios beyond the scope of an existing suite can be extremely challenging and labor-intensive, particularly when managing large test suites over extended periods. In this paper, we propose E-Test, an approach that narrows the gap between the execution space explored by a test suite and the executions observed after testing, by augmenting the suite with test cases that exercise execution scenarios that emerge in production. E-Test (i) identifies not-yet-tested executions within large sets of scenarios, such as those monitored during intensive production usage, and (ii) generates new test cases that enhance the test suite. E-Test leverages Large Language Models (LLMs) to pinpoint scenarios that the current test suite does not adequately cover, and augments the suite with test cases that execute these scenarios. Our evaluation on a dataset of 1,975 scenarios, collected from highly starred open-source Java projects already in production and from Defects4J, demonstrates that E-Test retrieves not-yet-tested execution scenarios significantly better than state-of-the-art approaches. While existing regression testing and field testing approaches for this task achieve a maximum F1-score of 0.34, and vanilla LLMs achieve a maximum F1-score of 0.39, E-Test reaches 0.55. These results highlight the impact of E-Test in enhancing test suites by effectively targeting not-yet-tested execution scenarios and reducing the manual effort required to maintain them.
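The abstract describes a two-step pipeline: first classify monitored production scenarios as tested or not-yet-tested, then generate tests only for the misses. The Java sketch below illustrates that shape under stated assumptions: the LlmClient interface, the Scenario record, and both prompt strings are hypothetical placeholders for illustration, not E-Test's actual design.

```java
// Illustrative only: a minimal sketch of the two-step loop described above.
// LlmClient, Scenario, and the prompt wording are assumptions of this sketch,
// not the actual E-Test implementation.
import java.util.ArrayList;
import java.util.List;

public class ETestSketch {

    /** Hypothetical LLM wrapper; any chat-completion client could back this. */
    interface LlmClient {
        String complete(String prompt);
    }

    /** Simplified scenario: the invoked method plus its captured inputs. */
    record Scenario(String methodSignature, String capturedInputs) {}

    private final LlmClient llm;

    ETestSketch(LlmClient llm) {
        this.llm = llm;
    }

    /** Step (i): keep only scenarios the LLM judges the suite does not cover. */
    List<Scenario> findUntested(List<Scenario> production, String testSuiteSource) {
        List<Scenario> untested = new ArrayList<>();
        for (Scenario s : production) {
            String verdict = llm.complete(
                "Given this test suite:\n" + testSuiteSource
                + "\nDoes any test already exercise " + s.methodSignature()
                + " with inputs " + s.capturedInputs() + "? Answer TESTED or UNTESTED.");
            if (verdict.contains("UNTESTED")) {
                untested.add(s);
            }
        }
        return untested;
    }

    /** Step (ii): ask the LLM for a JUnit test per not-yet-tested scenario. */
    List<String> generateTests(List<Scenario> untested) {
        List<String> tests = new ArrayList<>();
        for (Scenario s : untested) {
            tests.add(llm.complete(
                "Write a JUnit 5 test that calls " + s.methodSignature()
                + " with inputs " + s.capturedInputs()
                + " and asserts the observed behavior."));
        }
        return tests;
    }

    public static void main(String[] args) {
        // Stub client so the sketch runs standalone: flags everything as untested.
        LlmClient stub = prompt -> prompt.startsWith("Write")
                ? "@Test void coversMalformedJson() { /* generated body */ }"
                : "UNTESTED";
        ETestSketch sketch = new ETestSketch(stub);
        List<Scenario> production = List.of(
                new Scenario("Parser.parse(String)", "\"{malformed json\""));
        List<Scenario> missed = sketch.findUntested(production, "/* suite source */");
        sketch.generateTests(missed).forEach(System.out::println);
    }
}
```

Separating detection from generation means the generation step runs only on scenarios the suite actually misses; the F1-scores reported in the abstract measure how well that detection step retrieves not-yet-tested scenarios.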
Similar Papers
Ever-Improving Test Suite by Leveraging Large Language Models
Software Engineering
Finds bugs in software before users do.
GenIA-E2ETest: A Generative AI-Based Approach for End-to-End Test Automation
Software Engineering
Writes computer tests from plain English.
A Study on the Improvement of Code Generation Quality Using Large Language Models Leveraging Product Documentation
Software Engineering
Makes apps work right by testing them automatically.