Does AI Code Review Lead to Code Changes? A Case Study of GitHub Actions
By: Kexin Sun, Hongyu Kuang, Sebastian Baltes, and others
Potential Business Impact:
Helps developers find mistakes in code faster.
AI-based code review tools automatically review and comment on pull requests to improve code quality. Despite their growing presence, little is known about their actual impact. We present a large-scale empirical study of 16 popular AI-based code review actions for GitHub workflows, analyzing more than 22,000 review comments in 178 repositories. We investigate (1) how these tools are adopted and configured, (2) whether their comments lead to code changes, and (3) which factors influence their effectiveness. We develop a two-stage LLM-assisted framework to determine whether review comments are addressed, and use interpretable machine learning to identify influencing factors. Our findings show that, while adoption is growing, effectiveness varies widely. Comments that are concise, contain code snippets, and are manually triggered, particularly those from hunk-level review tools, are more likely to result in code changes. These results highlight the importance of careful tool design and suggest directions for improving AI-based code review systems.
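The study's actual framework for deciding whether a review comment was addressed is two-stage and LLM-assisted; the details are in the paper. As a rough, simplified illustration of the underlying matching problem, a line-overlap heuristic could check whether any commit made after a comment modified the lines the comment is anchored to. All names below are hypothetical, and this is a sketch of the general idea, not the paper's method:

```python
from dataclasses import dataclass

@dataclass
class ReviewComment:
    """A review comment anchored to a line range in a file (hypothetical model)."""
    file: str
    start_line: int
    end_line: int

@dataclass
class CommitHunk:
    """A contiguous range of lines changed by a later commit (hypothetical model)."""
    file: str
    start_line: int
    end_line: int

def comment_addressed(comment: ReviewComment, later_hunks: list[CommitHunk]) -> bool:
    """Heuristic proxy for 'comment led to a code change': the comment counts
    as addressed if any subsequent hunk touches an overlapping line range in
    the same file. (The paper uses an LLM-assisted judgment instead.)"""
    for hunk in later_hunks:
        same_file = hunk.file == comment.file
        overlaps = not (hunk.end_line < comment.start_line
                        or hunk.start_line > comment.end_line)
        if same_file and overlaps:
            return True
    return False
```

For example, a comment on lines 10-12 of `app.py` followed by a commit changing line 11 of the same file would count as addressed, while a commit touching only another file would not.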
Similar Papers
On the Use of Agentic Coding: An Empirical Study of Pull Requests on GitHub
Software Engineering
AI helps programmers fix code, saving them time.
Social Media Reactions to Open Source Promotions: AI-Powered GitHub Projects on Hacker News
Software Engineering
Helps AI projects get noticed and grow faster.
GitHub's Copilot Code Review: Can AI Spot Security Flaws Before You Commit?
Software Engineering
AI code checker misses big security problems.