Evaluating Code Reasoning Abilities of Large Language Models Under Real-World Settings
By: Changshu Liu, Alireza Ghazanfari, Yang Chen, and more
Code reasoning tasks are becoming prevalent in large language model (LLM) assessments. Existing benchmarks involve simple programs and fail to represent real-world complexities such as inter- or intra-procedural dependencies, core or third-party API calls, highly nested constructs, and non-primitive complex types. Evaluating LLMs in such simplistic settings risks overstating their generalizability in practice. To enable a more realistic evaluation of code reasoning, this paper proposes RE2-Bench, a benchmark of 1,101 reasoning problems, including 195 drawn from mature real-world projects. RE2-Bench leverages static and dynamic program analysis to automatically serialize and deserialize compound, complex, and custom types in real-world code, going far beyond the primitive-only settings used in prior work. A key feature of RE2-Bench is categorizing each reasoning problem as Easy or Hard via a principled majority-vote mechanism over nine interpretable code complexity metrics, resulting in two well-separated and semantically meaningful difficulty categories suitable for precise calibration of LLM reasoning ability. A comprehensive evaluation of six general-purpose and reasoning-oriented LLMs on two widely used code reasoning tasks -- input prediction and output prediction -- using RE2-Bench reveals a significant performance drop from Easy to Hard problems (51.50% for input prediction and 42.15% for output prediction), confirming that prior evaluations substantially overestimate the reasoning capabilities of LLMs.
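The abstract does not spell out how the majority vote over nine complexity metrics is computed. The snippet below is a minimal sketch of one plausible reading: each metric votes Hard when it exceeds a per-metric threshold, and the majority decides the label. The metric names and thresholds are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' implementation) of majority-vote difficulty
# labeling over nine code-complexity metrics. Metric names and thresholds are
# hypothetical placeholders for illustration only.
from typing import Dict

# Hypothetical per-metric thresholds above which a metric "votes" Hard.
THRESHOLDS: Dict[str, float] = {
    "cyclomatic_complexity": 10,
    "max_nesting_depth": 4,
    "num_function_calls": 15,
    "num_parameters": 5,
    "lines_of_code": 80,
    "num_branches": 12,
    "num_loops": 4,
    "num_custom_types": 2,
    "call_graph_depth": 3,
}

def difficulty(metrics: Dict[str, float]) -> str:
    """Label a reasoning problem Easy or Hard by majority vote over metrics."""
    hard_votes = sum(metrics[name] > t for name, t in THRESHOLDS.items())
    return "Hard" if hard_votes > len(THRESHOLDS) / 2 else "Easy"

if __name__ == "__main__":
    example = {
        "cyclomatic_complexity": 14, "max_nesting_depth": 5,
        "num_function_calls": 20, "num_parameters": 3,
        "lines_of_code": 120, "num_branches": 16,
        "num_loops": 5, "num_custom_types": 1,
        "call_graph_depth": 4,
    }
    print(difficulty(example))  # -> "Hard" (7 of 9 metrics vote Hard)
```

Under this reading, a problem is Hard only when most independent complexity signals agree, which would explain why the paper describes the two categories as well separated.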
Similar Papers
ReasonBENCH: Benchmarking the (In)Stability of LLM Reasoning
Artificial Intelligence
Tests if AI's thinking is reliable.
TF-Bench: Evaluating Program Semantics Reasoning with Type Inference in System F
Computation and Language
Tests if computers truly understand code.
RiddleBench: A New Generative Reasoning Benchmark for LLMs
Computation and Language
Tests AI's reasoning and finds it struggles.