Benchmarking and Studying the LLM-based Code Review
By: Zhengran Zeng, Ruikai Shi, Keke Han, and more
Potential Business Impact:
Helps computers find mistakes in computer code.
Automated Code Review (ACR) is crucial for software quality, yet existing benchmarks often fail to reflect real-world complexities, hindering the evaluation of modern Large Language Models (LLMs). Current benchmarks frequently focus on fine-grained code units, lack complete project context, and use inadequate evaluation metrics. To address these limitations, we introduce SWRBench, a new benchmark comprising 1000 manually verified Pull Requests (PRs) from GitHub, offering PR-centric review with full project context. SWRBench employs an objective LLM-based evaluation method that aligns strongly with human judgment (~90% agreement) by verifying whether issues from a structured ground truth are covered in generated reviews. Our systematic evaluation of mainstream ACR tools and LLMs on SWRBench reveals that current systems underperform, and that ACR tools are more adept at detecting functional errors. Subsequently, we propose and validate a simple multi-review aggregation strategy that significantly boosts ACR performance, increasing F1 scores by up to 43.67%. Our contributions include the SWRBench benchmark, its objective evaluation method, a comprehensive study of current ACR capabilities, and an effective enhancement approach, offering valuable insights for advancing ACR research.
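The evaluation protocol and aggregation strategy described in the abstract can be sketched in code. The Python snippet below is a minimal illustration, not the authors' implementation: it assumes a ground truth structured as a list of issue descriptions per PR, an external `judge_covers(issue, review)` callable standing in for the LLM-based judge, and a simple union-style merge over several independently generated reviews. All class, function, and field names are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


# Hypothetical data structures; SWRBench's actual schema may differ.
@dataclass
class PullRequestSample:
    pr_id: str
    ground_truth_issues: List[str]   # manually verified issues for this PR
    generated_reviews: List[str]     # reviews produced by an ACR system


def evaluate_review(
    issues: List[str],
    review: str,
    judge_covers: Callable[[str, str], bool],
) -> Tuple[float, float, float]:
    """Precision/recall/F1 of one review against the structured ground truth.

    `judge_covers(issue, review)` stands in for the LLM-based judge that
    decides whether a ground-truth issue is mentioned in the review.
    """
    covered = [issue for issue in issues if judge_covers(issue, review)]
    recall = len(covered) / len(issues) if issues else 0.0
    # A real setup would also judge the validity of each review comment;
    # here we crudely treat each non-empty paragraph as one claimed issue.
    claims = [p for p in review.split("\n\n") if p.strip()]
    precision = len(covered) / len(claims) if claims else 0.0
    f1 = (
        2 * precision * recall / (precision + recall)
        if precision + recall > 0
        else 0.0
    )
    return precision, recall, f1


def aggregate_reviews(reviews: List[str]) -> str:
    """Naive multi-review aggregation: merge and deduplicate paragraphs.

    The idea is that issues missed by one sampled review may be caught by
    another; this union-style merge only conveys the intuition behind the
    aggregation strategy, not its exact mechanics.
    """
    seen, merged = set(), []
    for review in reviews:
        for paragraph in review.split("\n\n"):
            key = paragraph.strip().lower()
            if key and key not in seen:
                seen.add(key)
                merged.append(paragraph.strip())
    return "\n\n".join(merged)
```

In this sketch, an aggregated review produced by `aggregate_reviews` is scored with the same `evaluate_review` function, which mirrors how a union of multiple reviews can raise recall (and hence F1) relative to any single review.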
Similar Papers
Benchmarking LLMs for Fine-Grained Code Review with Enriched Context in Practice
Software Engineering
Helps computers find code errors better.
SWE-Bench++: A Framework for the Scalable Generation of Software Engineering Benchmarks from Open-Source Repositories
Software Engineering
Teaches computers to fix and add code.
Sphinx: Benchmarking and Modeling for LLM-Driven Pull Request Review
Software Engineering
Helps computers find mistakes in code automatically.