Assessing Code Understanding in LLMs

Published: March 31, 2025 | arXiv ID: 2504.00065v1

By: Cosimo Laneve, Alvise Spanò, Dalila Ressi, and more

Potential Business Impact:

Helps AI systems judge whether code changes preserve program behavior.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present an empirical evaluation of Large Language Models in code understanding associated with non-trivial, semantic-preserving program transformations such as copy propagation or constant folding. Our findings show that LLMs fail to judge semantic equivalence in approximately 41% of cases when no context is provided and in 29% when given a simple generic context. To improve accuracy, we advocate integrating LLMs with code-optimization tools to enhance training and facilitate more robust program understanding.
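To illustrate the kind of program pair such an evaluation involves (a minimal sketch, not taken from the paper), the two functions below are related by constant folding and copy propagation and are semantically equivalent; an LLM would be asked to judge whether they compute the same result.

```python
def original(n):
    # Before transformation
    x = 2 * 3          # constant expression
    y = x              # copy of x
    return n + y       # uses the copy


def transformed(n):
    # After constant folding (2 * 3 -> 6) and
    # copy propagation (y replaced by its value)
    return n + 6


# Both functions return the same value for every input,
# so the transformation is semantics-preserving.
assert all(original(i) == transformed(i) for i in range(100))
```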

Country of Origin
🇮🇹 Italy

Page Count
22 pages

Category
Computer Science:
Software Engineering