Automating Code Review: A Systematic Literature Review
By: Rosalia Tufano, Gabriele Bavota
Potential Business Impact:
Helps computers check code for mistakes automatically.
Code review consists of assessing the code written by teammates with the goal of increasing code quality. Empirical studies have documented the benefits of this practice, which, however, comes at a cost in terms of developers' time. For this reason, researchers have proposed techniques and tools to automate code review tasks such as reviewer selection (i.e., identifying suitable reviewers for a given code change) or the actual review of a given change (i.e., recommending improvements to the contributor as a human reviewer would do). Given the substantial number of papers recently published on the topic, it may be challenging for researchers and practitioners to get a complete overview of the state of the art. We present a systematic literature review (SLR) covering 119 papers on the automation of code review tasks. We provide: (i) a categorization of the code review tasks automated in the literature; (ii) an overview of the under-the-hood techniques used for the automation, including the datasets used for training data-driven techniques; (iii) publicly available techniques and the datasets used for their evaluation, with a description of the evaluation metrics usually adopted for each task. The SLR concludes with a discussion of the current limitations of the state of the art, with insights for future research directions.
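To make the reviewer-selection task concrete, here is a minimal sketch of one common baseline idea: ranking candidate reviewers by how often they previously reviewed the files touched by a new change. This is an illustrative toy heuristic, not a specific technique surveyed in the paper; the function name, data shape, and reviewer names are all hypothetical.

```python
from collections import Counter

def recommend_reviewers(changed_files, review_history, top_k=3):
    """Rank candidate reviewers by how often they previously reviewed
    the files touched by the current change (naive frequency baseline)."""
    scores = Counter()
    for path in changed_files:
        for reviewer in review_history.get(path, []):
            scores[reviewer] += 1
    return [name for name, _ in scores.most_common(top_k)]

# Hypothetical review history: file path -> reviewers of past changes to it.
history = {
    "src/parser.py": ["alice", "bob", "alice"],
    "src/lexer.py": ["alice", "carol"],
}
recommend_reviewers(["src/parser.py", "src/lexer.py"], history)
# → ['alice', 'bob', 'carol']
```

Real approaches covered in the literature go well beyond such frequency counts, e.g., using path similarity, review recency, or learned models, but they address the same matching problem.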
Similar Papers
Automated Code Review Using Large Language Models with Symbolic Reasoning
Software Engineering
AI finds coding mistakes better by thinking logically.
Automated Unit Test Case Generation: A Systematic Literature Review
Software Engineering
Makes software testing faster and better.
Can Agents Judge Systematic Reviews Like Humans? Evaluating SLRs with LLM-based Multi-Agent System
Artificial Intelligence
Helps scientists quickly check research quality.