LLM-based Unit Test Generation for Dynamically-Typed Programs
By: Runlin Liu, Zhe Zhang, Yunge Hu and more
Potential Business Impact:
Makes computer tests work better for tricky code.
Automated unit test generation has been widely studied, but generating effective tests for dynamically typed programs remains a significant challenge. Existing approaches, including search-based software testing (SBST) and recent LLM-based methods, often suffer from type errors, leading to invalid inputs and assertion failures, ultimately reducing testing effectiveness. To address this, we propose TypeTest, a novel framework that enhances type correctness in test generation through a vector-based Retrieval-Augmented Generation (RAG) system. TypeTest employs call instance retrieval and feature-based retrieval to infer parameter types accurately and construct valid test inputs. Furthermore, it utilizes the call graph to extract richer contextual information, enabling more accurate assertion generation. In addition, TypeTest incorporates a repair mechanism and iterative test generation, progressively refining test cases to improve coverage. In an evaluation on 125 real-world Python modules, TypeTest achieved an average statement coverage of 86.6% and branch coverage of 76.8%, outperforming state-of-the-art tools by 5.4% and 9.3%, respectively.
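To make the call-instance retrieval idea concrete, here is a minimal sketch, assuming a hypothetical corpus of embedded call instances and an illustrative helper infer_parameter_types; these names are not the actual TypeTest API. The sketch shows how retrieving similar calls could recover likely parameter types, which would then inform the LLM prompt so generated tests construct type-correct inputs.

```python
# Hypothetical illustration of call-instance retrieval for type inference.
# CallInstance, cosine, and infer_parameter_types are assumed names, not TypeTest's API.
from dataclasses import dataclass, field
from math import sqrt

@dataclass
class CallInstance:
    """A previously observed call to a function, with example argument values."""
    callee: str
    args: dict = field(default_factory=dict)   # parameter name -> example value
    embedding: list = field(default_factory=list)  # vector for the call's context

def cosine(a, b):
    """Cosine similarity between two context embeddings."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a)) or 1.0
    nb = sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

def infer_parameter_types(query_embedding, corpus, top_k=3):
    """Retrieve the most similar call instances and read off argument types."""
    ranked = sorted(corpus, key=lambda c: cosine(query_embedding, c.embedding),
                    reverse=True)
    types = {}
    for call in ranked[:top_k]:
        for name, value in call.args.items():
            types.setdefault(name, type(value).__name__)
    return types

# Usage: the inferred types would be added to the LLM prompt so the generated
# unit test builds valid, type-correct inputs for the focal function.
corpus = [
    CallInstance("parse_record", {"line": "a,b,c", "sep": ","}, [0.9, 0.1]),
    CallInstance("parse_record", {"line": "x;y", "sep": ";"}, [0.8, 0.2]),
]
print(infer_parameter_types([0.85, 0.15], corpus))  # {'line': 'str', 'sep': 'str'}
```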
Similar Papers
Precisely Detecting Python Type Errors via LLM-based Unit Test Generation
Software Engineering
Finds hidden bugs in computer code.
Combining Type Inference and Automated Unit Test Generation for Python
Software Engineering
Helps programs find errors by watching them run.
Improving Retrieval-Augmented Deep Assertion Generation via Joint Training
Software Engineering
Makes computer code checks more accurate.