Automated Code Review Using Large Language Models with Symbolic Reasoning

Published: July 24, 2025 | arXiv ID: 2507.18476v1

By: Busra Icoz, Goksel Biricik

Potential Business Impact:

AI finds coding mistakes better by thinking logically.

Code review is one of the key processes in the software development lifecycle and is essential to maintain code quality. However, manual code review is subjective and time-consuming. Given its rule-based nature, code review is well suited for automation. In recent years, significant efforts have been made to automate this process with the help of artificial intelligence. Large Language Models (LLMs) have recently emerged as a promising tool in this area, but these models often lack the logical reasoning capabilities needed to fully understand and evaluate code. To overcome this limitation, this study proposes a hybrid approach that integrates symbolic reasoning techniques with LLMs to automate the code review process. We tested our approach using the CodeXGLUE dataset, comparing several models, including CodeT5, CodeBERT, and GraphCodeBERT, to assess the effectiveness of combining symbolic reasoning and prompting techniques with LLMs. Our results show that this approach improves the accuracy and efficiency of automated code review.
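The abstract does not spell out how the symbolic and neural components are wired together, so the sketch below is only one plausible way to pair them, not the authors' implementation: a small set of illustrative AST-based rules (the symbolic side) combined with a CodeBERT-style defect classifier (the LLM side). The checkpoint name "my-org/codebert-defect-detection" is hypothetical, the binary-label layout is assumed, and the fusion rule is a deliberately simple OR of the two signals.

```python
# Minimal sketch of combining symbolic (AST-rule) reasoning with an
# LLM-based defect classifier for automated code review.
# Assumptions: a fine-tuned binary defect-detection checkpoint exists at
# the hypothetical name "my-org/codebert-defect-detection"; the two AST
# rules are illustrative examples, not the paper's rule set.
import ast

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def symbolic_findings(source: str) -> list[str]:
    """Run simple AST-based rules and return human-readable findings."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Rule: a bare `except:` silently swallows all errors.
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare except clause")
        # Rule: mutable default arguments are a common source of bugs.
        if isinstance(node, ast.FunctionDef):
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    findings.append(
                        f"line {node.lineno}: mutable default argument in {node.name}()"
                    )
    return findings


def llm_defect_score(source: str, model_name: str) -> float:
    """Return the classifier's probability that the snippet is defective."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    inputs = tokenizer(source, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumes label index 1 means "defective".
    return torch.softmax(logits, dim=-1)[0, 1].item()


def review(source: str, model_name: str = "my-org/codebert-defect-detection") -> dict:
    """Fuse symbolic findings with the LLM score into one review verdict."""
    findings = symbolic_findings(source)
    score = llm_defect_score(source, model_name)
    # Simple fusion: flag the snippet if either signal raises a concern.
    flagged = bool(findings) or score > 0.5
    return {"flagged": flagged, "llm_defect_probability": score, "findings": findings}


if __name__ == "__main__":
    snippet = (
        "def load(path, cache=[]):\n"
        "    try:\n"
        "        cache.append(open(path).read())\n"
        "    except:\n"
        "        pass\n"
    )
    print(review(snippet))
```

In this toy fusion, the symbolic rules contribute precise, explainable findings while the classifier supplies a soft defect probability; a real system would likely also feed the symbolic findings back into the prompt, as the paper's prompting-based variant suggests.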

Country of Origin
🇹🇷 Turkey

Page Count
5 pages

Category
Computer Science:
Software Engineering