Score: 3

TF-Bench: Evaluating Program Semantics Reasoning with Type Inference in System F

Published: September 28, 2025 | arXiv ID: 2509.23686v1

By: Yifeng He, Luning Yang, Christopher Castro Gaw Gonzalo, and more

Potential Business Impact:

Tests whether language models truly understand code semantics rather than matching surface tokens.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) are increasingly integrated into the software engineering ecosystem. Their test-time compute (TTC) reasoning capabilities show significant potential for understanding program logic and semantics beyond mere token recognition. However, current benchmarks for code reasoning lack a formal, program-centric deductive framework to ensure sound evaluation, and are incapable of assessing whether models genuinely reason about program semantics or merely exploit superficial associations between natural language and code tokens. To bridge this gap, we introduce TF-Bench, a benchmark designed to evaluate LLM reasoning based on type inference in System F, a task we refer to as program semantics reasoning. By employing verified transformations to remove semantically irrelevant natural language, we construct TF-Bench_pure, a purely semantics-driven variant of TF-Bench. Our analysis reveals substantial limitations in state-of-the-art LLMs, with the best-performing LLM (Claude-3.7-Sonnet) achieving only 55.85% accuracy on TF-Bench_pure. Additionally, we propose two novel metrics to assess robustness and the effectiveness of test-time reasoning, underscoring critical limitations in current LLM capabilities and highlighting essential directions for future research.
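
To make the task concrete, here is a rough, hypothetical illustration of the kind of item the benchmark targets (not drawn from TF-Bench itself): given the body of a polymorphic function in a Haskell-like language, the model must recover its most general type, which corresponds to a System F type with explicit quantifiers.

    {-# LANGUAGE ExplicitForAll #-}
    -- Hypothetical TF-Bench-style item (illustration only): given just the
    -- definition below, the model must infer the most general type.
    compose f g x = f (g x)

    -- GHC infers: compose :: (b -> c) -> (a -> b) -> a -> c
    -- The same type with the System F quantifiers made explicit:
    compose' :: forall a b c. (b -> c) -> (a -> b) -> a -> c
    compose' f g x = f (g x)

Stripping the natural-language cues (descriptive names like compose, comments, docstrings) while preserving this typing task is, in spirit, what the TF-Bench_pure variant does: a model that only pattern-matches on familiar identifiers loses its shortcut, while one that reasons about the program's semantics should still recover the type.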

Country of Origin
πŸ‡­πŸ‡° πŸ‡ΊπŸ‡Έ Hong Kong, United States

Repos / Data Links

Page Count
28 pages

Category
Computer Science:
Computation and Language