Automated Code Review Using Large Language Models with Symbolic Reasoning
By: Busra Icoz, Goksel Biricik
Potential Business Impact:
AI finds coding mistakes better by thinking logically.
Code review is one of the key processes in the software development lifecycle and is essential for maintaining code quality. However, manual code review is subjective and time-consuming, and its largely rule-based nature makes it well suited to automation. In recent years, significant effort has gone into automating this process with artificial intelligence. Large Language Models (LLMs) have recently emerged as a promising tool in this area, but these models often lack the logical reasoning capabilities needed to fully understand and evaluate code. To overcome this limitation, this study proposes a hybrid approach that integrates symbolic reasoning techniques with LLMs to automate the code review process. We tested our approach on the CodeXGLUE dataset, comparing several models, including CodeT5, CodeBERT, and GraphCodeBERT, to assess the effectiveness of combining symbolic reasoning and prompting techniques with LLMs. Our results show that this approach improves the accuracy and efficiency of automated code review.
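To make the hybrid idea concrete, here is a minimal sketch of one plausible pipeline: a symbolic layer runs deterministic AST-based checks, and the verified findings are folded into the prompt so the LLM reasons over established facts rather than rediscovering them. The two rules, the function names (`symbolic_checks`, `build_review_prompt`), and the prompt format are illustrative assumptions, not the paper's actual rule set or integration.

```python
import ast

def symbolic_checks(source: str) -> list[str]:
    """Symbolic layer: deterministic AST-based checks over a snippet.

    Two example rules stand in for the paper's (unspecified) rule set.
    """
    findings = []
    for node in ast.walk(ast.parse(source)):
        # Rule 1: a bare `except:` silently swallows every error.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare `except:` hides errors")
        # Rule 2: mutable default arguments are shared across calls.
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {node.lineno}: mutable default argument in `{node.name}`"
                    )
    return findings

def build_review_prompt(source: str, findings: list[str]) -> str:
    """Prompting layer: hand the LLM the symbolically verified facts."""
    facts = "\n".join(f"- {f}" for f in findings) or "- no rule violations found"
    return (
        "You are a code reviewer. A static analyzer verified these facts:\n"
        f"{facts}\n\n"
        "Review the following code, explain each confirmed issue, and "
        "flag anything else you notice:\n\n"
        f"```python\n{source}```"
    )

if __name__ == "__main__":
    snippet = (
        "def cache(x, seen=[]):\n"
        "    try:\n"
        "        seen.append(x)\n"
        "    except:\n"
        "        pass\n"
    )
    prompt = build_review_prompt(snippet, symbolic_checks(snippet))
    print(prompt)  # send to the LLM of your choice (e.g., a CodeT5 checkpoint)
```

On this snippet the symbolic layer flags both the mutable default and the bare `except:`, so the LLM's job shifts from detection to explanation and prioritization, which is the division of labor the hybrid approach aims for.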
Similar Papers
LAURA: Enhancing Code Review Generation with Context-Enriched Retrieval-Augmented LLM
Software Engineering
Helps computers write better code suggestions.
Code Review Without Borders: Evaluating Synthetic vs. Real Data for Review Recommendation
Software Engineering
Teaches computers to check new code automatically.
Exploring the Potential of Large Language Models in Fine-Grained Review Comment Classification
Software Engineering
Helps computers understand code feedback better.