Score: 2

Precisely Detecting Python Type Errors via LLM-based Unit Test Generation

Published: July 3, 2025 | arXiv ID: 2507.02318v1

By: Chen Yang, Ziqi Wang, Yanjie Jiang, and more

BigTech Affiliations: Huawei

Potential Business Impact:

Automatically generates unit tests that expose Python type errors before they cause runtime failures, with far fewer false alarms than existing static analysis tools.

Business Areas:
Intrusion Detection, Information Technology, Privacy and Security

Type errors in Python often lead to runtime failures, posing significant challenges to software reliability and developer productivity. Existing static analysis tools aim to detect such errors without execution but frequently suffer from high false positive rates. Unit test generation techniques have recently shown great promise in achieving high test coverage, but they often struggle to produce bug-revealing tests without tailored guidance. To address these limitations, we present RTED, a novel type-aware test generation technique for automatically detecting Python type errors. Specifically, RTED combines step-by-step type constraint analysis with reflective validation to guide the test generation process and effectively suppress false positives. We evaluated RTED on two widely used benchmarks, BugsInPy and TypeBugs. Experimental results show that RTED detects 22-29 more benchmarked type errors than four state-of-the-art techniques. RTED also produces fewer false positives, improving precision by 173.9%-245.9%. Furthermore, RTED discovered 12 previously unknown type errors in six real-world open-source Python projects.
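The abstract only sketches RTED at a high level. As a rough illustration of the underlying idea, the minimal Python sketch below shows a type-aware generate-then-validate loop: derive a (greatly simplified) type constraint for a target function, ask an LLM for a bug-revealing test, then "reflectively" validate by executing the test and reporting a type error only when a TypeError actually surfaces. Everything here is assumed for illustration: `buggy_add`, the canned test, and the stand-in `llm_generate_test` are hypothetical, not the authors' implementation.

```python
"""Minimal sketch of a type-aware generate-then-validate loop.

Not the RTED implementation; the real system's constraint analysis and
LLM prompting are far more elaborate.
"""

import traceback


def buggy_add(values):
    # Target function under analysis: adding an int to a list raises
    # TypeError at runtime, the class of bug this loop targets.
    return values + 1


def llm_generate_test(prompt: str) -> str:
    # Hypothetical LLM call; replaced with a canned test so the sketch
    # stays runnable without any model backend.
    return (
        "def test_buggy_add():\n"
        "    buggy_add([1, 2, 3])\n"
    )


def reflective_validation(test_source: str, namespace: dict) -> bool:
    # Execute the generated test; report a type error only if a TypeError
    # is actually raised, which is how false positives are suppressed.
    exec(test_source, namespace)
    test_fn = namespace["test_buggy_add"]
    try:
        test_fn()
    except TypeError:
        traceback.print_exc()
        return True   # confirmed type error
    except Exception:
        return False  # some other failure; not reported as a type error
    return False


if __name__ == "__main__":
    # Step 1 (type constraint analysis, greatly simplified): the parameter
    # `values` flows into `values + 1`, so a list argument violates the
    # implied numeric constraint.
    prompt = "Write a unit test that calls buggy_add with a list argument."
    test_code = llm_generate_test(prompt)
    # Step 2: reflective validation by execution.
    if reflective_validation(test_code, {"buggy_add": buggy_add}):
        print("Type error confirmed by a generated, executed test.")
    else:
        print("No confirmed type error.")
```

Because a report is only raised after the generated test actually triggers a TypeError at runtime, this style of execution-backed validation is what allows the approach to keep precision high relative to purely static checkers.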

Country of Origin
🇨🇳 China

Page Count
12 pages

Category
Computer Science:
Software Engineering