A Dataset and Preliminary Study of Using GPT-5 for Code-change Impact Analysis
By: Katharina Stengg, Christian Macho, Martin Pinzger
Potential Business Impact:
Helps computers guess which code parts break.
Understanding source code changes and their impact on other code entities is a crucial skill in software development. However, the analysis of code changes and their impact is often performed manually and is therefore time-consuming. Recent advancements in AI, in particular large language models (LLMs), show promise for helping developers in various code analysis tasks. However, the extent to which this potential can be utilized for understanding code changes and their impact is underexplored. To address this gap, we study the capabilities of GPT-5 and GPT-5-mini to predict the code entities impacted by given source code changes. Because existing datasets lack crucial information about seed changes and impacted code entities, we construct a dataset containing information about seed changes, change pairs, and change types for each commit. Our experiments evaluate the LLMs in two configurations: (1) seed-change information and the parent commit tree and (2) seed-change information, the parent commit tree, and the diff hunk of each seed change. We found that both LLMs perform poorly in the two experiments, with GPT-5 outperforming GPT-5-mini. Furthermore, providing the diff hunks slightly improves the performance of both models.
Similar Papers
The Impact of Large Language Models (LLMs) on Code Review Process
Software Engineering
Helps programmers fix code much faster.
From Code Foundation Models to Agents and Applications: A Comprehensive Survey and Practical Guide to Code Intelligence
Software Engineering
Helps computers write computer programs from words.
The Impact of Generative AI on Code Expertise Models: An Exploratory Study
Software Engineering
Makes computer code knowledge checks less reliable.