Precisely Detecting Python Type Errors via LLM-based Unit Test Generation
By: Chen Yang, Ziqi Wang, Yanjie Jiang, and more
Potential Business Impact:
Finds hidden type errors in Python code before they cause crashes.
Type errors in Python often lead to runtime failures, posing significant challenges to software reliability and developer productivity. Existing static analysis tools aim to detect such errors without execution but frequently suffer from high false positive rates. Recently, unit test generation techniques have shown great promise in achieving high test coverage, but they often struggle to produce bug-revealing tests without tailored guidance. To address these limitations, we present RTED, a novel type-aware test generation technique for automatically detecting Python type errors. Specifically, RTED combines step-by-step type constraint analysis with reflective validation to guide the test generation process and effectively suppress false positives. We evaluated RTED on two widely-used benchmarks, BugsInPy and TypeBugs. Experimental results show that RTED detects 22-29 more benchmarked type errors than four state-of-the-art techniques. RTED also produces fewer false positives, improving precision by 173.9%-245.9%. Furthermore, RTED discovered 12 previously unknown type errors in six real-world open-source Python projects.
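To make the described pipeline concrete, here is a minimal, illustrative sketch of the three-stage workflow the abstract outlines: infer type constraints on a target function, generate a candidate bug-revealing test, and reflectively validate it by execution so only tests that trigger a real runtime type error are reported. The helper names (infer_constraints, generate_test, reflective_validation) are hypothetical stand-ins; in RTED the analysis and generation steps are performed by an LLM, which is stubbed out here with hand-written logic.

```python
# Hypothetical sketch of an RTED-style workflow (not the authors' code).
import traceback


def target(values):
    """Function under test: implicitly assumes `values` is a list of numbers."""
    return sum(values) / len(values)


def infer_constraints(func):
    # Step 1 (stubbed): step-by-step type constraint analysis. An LLM would
    # read the source and conclude that `values` must be a non-empty
    # sequence of numbers; we hard-code that constraint here.
    return {"values": "non-empty sequence of numbers"}


def generate_test(func, constraints):
    # Step 2 (stubbed): synthesize an input that violates an inferred
    # constraint, e.g. a list containing a string, to try to expose a bug.
    return lambda: func([1, 2, "3"])


def reflective_validation(test):
    # Step 3: execute the candidate test and keep it only if it triggers a
    # genuine runtime type error, suppressing false positives from static
    # guesses that the interpreter would actually accept.
    try:
        test()
    except TypeError:
        return True  # confirmed bug-revealing test
    except Exception:
        traceback.print_exc()  # some other failure; not a type error
    return False


if __name__ == "__main__":
    constraints = infer_constraints(target)
    candidate = generate_test(target, constraints)
    if reflective_validation(candidate):
        print("Reported: confirmed type error in `target`")
    else:
        print("Discarded: candidate did not reproduce a type error")
```

The key design point this sketch illustrates is the reflective step: a report is emitted only after the interpreter itself raises a TypeError, which is how a technique of this shape can keep precision high compared to purely static checkers.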
Similar Papers
LLM-based Unit Test Generation for Dynamically-Typed Programs
Software Engineering
Makes computer tests work better for tricky code.
Bugs in the Shadows: Static Detection of Faulty Python Refactorings
Software Engineering
Finds mistakes when fixing computer code.
Combining Type Inference and Automated Unit Test Generation for Python
Software Engineering
Helps programs find errors by watching them run.