TypyBench: Evaluating LLM Type Inference for Untyped Python Repositories
By: Honghua Dong, Jiacheng Yang, Xun Deng, and more
Potential Business Impact:
Helps computers understand Python code types better.
Type inference for dynamic languages like Python is a persistent challenge in software engineering. While large language models (LLMs) have shown promise in code understanding, their type inference capabilities remain underexplored. We introduce TypyBench, a benchmark designed to evaluate LLMs' type inference across entire Python repositories. TypyBench features two novel metrics: TypeSim, which captures nuanced semantic relationships between predicted and ground truth types, and TypeCheck, which assesses type consistency across codebases. Our evaluation of various LLMs on a curated dataset of 50 high-quality Python repositories reveals that, although LLMs achieve decent TypeSim scores, they struggle with complex nested types and exhibit significant type consistency errors. These findings suggest that future research should shift focus from improving type similarity to addressing repository-level consistency. TypyBench provides a foundation for this new direction, offering insights into model performance across different type complexities and usage contexts. Our code and data are available at https://github.com/typybench/typybench.
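To make the kind of comparison the abstract describes concrete, here is a minimal, hypothetical sketch of scoring a predicted Python type annotation against a ground-truth one. The function names (parse_type, toy_type_similarity), the recursive parsing, and the 50/50 weighting of constructor vs. parameter agreement are illustrative assumptions for this page only; they are not the paper's actual TypeSim or TypeCheck definitions.

```python
# Hypothetical sketch: a toy similarity score between a predicted and a
# ground-truth Python type annotation. This is NOT the paper's TypeSim metric;
# it only illustrates the idea of giving partial credit when the outer type
# constructor matches but nested type parameters differ.
from __future__ import annotations


def parse_type(annotation: str) -> tuple[str, list]:
    """Parse an annotation like 'dict[str, list[int]]' into a (name, args) tree."""
    annotation = annotation.strip()
    if "[" not in annotation:
        return annotation, []
    name, rest = annotation.split("[", 1)
    inner = rest.rsplit("]", 1)[0]
    args, depth, current = [], 0, ""
    for ch in inner:  # split top-level, comma-separated type parameters
        if ch == "[":
            depth += 1
        elif ch == "]":
            depth -= 1
        if ch == "," and depth == 0:
            args.append(current)
            current = ""
        else:
            current += ch
    if current.strip():
        args.append(current)
    return name.strip(), [parse_type(a) for a in args]


def format_type(parsed: tuple[str, list]) -> str:
    """Turn a parsed (name, args) tree back into an annotation string."""
    name, args = parsed
    if not args:
        return name
    return f"{name}[{', '.join(format_type(a) for a in args)}]"


def toy_type_similarity(predicted: str, ground_truth: str) -> float:
    """Recursively compare the outer constructor and its type parameters."""
    p_name, p_args = parse_type(predicted)
    g_name, g_args = parse_type(ground_truth)
    head = 1.0 if p_name == g_name else 0.0
    if not p_args and not g_args:
        return head
    pairs = list(zip(p_args, g_args))
    if not pairs:  # one side is missing its type parameters entirely
        return 0.5 * head
    child = sum(
        toy_type_similarity(format_type(p), format_type(g)) for p, g in pairs
    ) / max(len(p_args), len(g_args))
    return 0.5 * head + 0.5 * child


if __name__ == "__main__":
    print(toy_type_similarity("list[int]", "list[int]"))       # 1.0 (exact match)
    print(toy_type_similarity("list[int]", "list[str]"))       # 0.5 (nested parameter differs)
    print(toy_type_similarity("dict[str, int]", "list[int]"))  # 0.0 (different constructor)
```

Per the abstract, the actual TypeSim metric is designed to capture more nuanced semantic relationships between types than a purely structural comparison like this, and TypeCheck additionally evaluates whether the predicted annotations remain consistent across the whole repository.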
Similar Papers
Beyond Memorization: Evaluating the True Type Inference Capabilities of LLMs for Java Code Snippets
Software Engineering
Helps computers understand code better, not just copy.
DI-BENCH: Benchmarking Large Language Models on Dependency Inference with Testable Repositories at Scale
Computation and Language
Helps computers build programs without errors.
TF-Bench: Evaluating Program Semantics Reasoning with Type Inference in System F
Computation and Language
Tests if computers truly understand code.