Fine-Tuning Multilingual Language Models for Code Review: An Empirical Study on Industrial C# Projects

Published: July 25, 2025 | arXiv ID: 2507.19271v1

By: Igli Begolli, Meltem Aksoy, Daniel Neider

Potential Business Impact:

Helps automate code review, so problematic code changes are flagged, commented on, and fixed faster.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Code review is essential for maintaining software quality but is often time-consuming and cognitively demanding, especially in industrial environments. Recent advancements in language models (LMs) have opened new avenues for automating core review tasks. This study presents an empirical evaluation of the impact of monolingual fine-tuning on the performance of open-source LMs across three key automated code review tasks: Code Change Quality Estimation, Review Comment Generation, and Code Refinement. We fine-tuned three distinct models, CodeReviewer, CodeLlama-7B, and DeepSeek-R1-Distill, on a C#-specific dataset combining public benchmarks with industrial repositories. Our study investigates how different configurations of programming languages and natural languages in the training data affect LM performance, particularly in comment generation. Additionally, we benchmark the fine-tuned models against an automated software analysis tool (ASAT) and human reviewers to evaluate their practical utility in real-world settings. Our results show that monolingual fine-tuning improves model accuracy and relevance compared to multilingual baselines. While LMs can effectively support code review workflows, especially for routine or repetitive tasks, human reviewers remain superior in handling semantically complex or context-sensitive changes. Our findings highlight the importance of language alignment and task-specific adaptation in optimizing LMs for automated code review.
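
To make the monolingual fine-tuning setup concrete, the sketch below shows how a model like CodeReviewer could be fine-tuned on C#-only diff/comment pairs for the Review Comment Generation task. This is an illustrative assumption, not the paper's exact pipeline: the Hugging Face transformers/datasets stack, the "microsoft/codereviewer" checkpoint, the field names, and the hyperparameters are all stand-ins chosen for the example.

    # Minimal sketch of monolingual (C#-only) fine-tuning for review comment
    # generation, assuming the Hugging Face transformers/datasets stack.
    # Checkpoint name, dataset fields, and hyperparameters are illustrative
    # assumptions, not the paper's reported configuration.
    from datasets import Dataset
    from transformers import (
        AutoModelForSeq2SeqLM,
        AutoTokenizer,
        DataCollatorForSeq2Seq,
        Seq2SeqTrainer,
        Seq2SeqTrainingArguments,
    )

    # Toy C#-only training pairs: code diff -> review comment (field names assumed).
    examples = [
        {
            "diff": "- var n = items.Count();\n+ var n = items.Count;",
            "comment": "Prefer the Count property over the LINQ Count() call for lists.",
        },
    ]
    dataset = Dataset.from_list(examples)

    checkpoint = "microsoft/codereviewer"  # public CodeReviewer checkpoint, assumed starting point
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    def tokenize(batch):
        # Encode the diff as the input and the review comment as the target sequence.
        model_inputs = tokenizer(batch["diff"], truncation=True, max_length=512)
        targets = tokenizer(batch["comment"], truncation=True, max_length=128)
        model_inputs["labels"] = targets["input_ids"]
        return model_inputs

    tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

    training_args = Seq2SeqTrainingArguments(
        output_dir="codereviewer-csharp",  # assumed output path
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=3e-5,
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=training_args,
        train_dataset=tokenized,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()

A comparable setup for the larger causal models named in the abstract (CodeLlama-7B, DeepSeek-R1-Distill) would presumably swap in a causal-LM head and a parameter-efficient method such as LoRA, given their size.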

Country of Origin
🇩🇪 Germany

Page Count
13 pages

Category
Computer Science:
Software Engineering